31

Extraction d'arguments de relations n-aires dans les textes guidée par une RTO de domaine / Extraction of arguments in N-ary relations in texts guided by a domain OTR

Berrahou, Soumia Lilia 29 September 2015 (has links)
Today, a huge amount of data is made available to the research community through several web-based libraries. Enhancing the data collected from scientific documents is a major challenge for analyzing and reusing domain knowledge efficiently. To be enhanced, data need to be extracted from documents and structured in a common representation using a controlled vocabulary, as in ontologies. Our research deals with knowledge-engineering issues for experimental data extracted from scientific articles, with the aim of reusing them in decision-support systems. Experimental data can be represented by n-ary relations that link a studied object (e.g., food packaging, a transformation process) with its features (e.g., oxygen permeability of a packaging, biomass grinding) and capitalized in an Ontological and Terminological Resource (OTR). An OTR associates an ontology with a terminological and/or linguistic part in order to establish a clear distinction between the term and the notion it denotes (the concept). Our work focuses on extracting n-ary relations from scientific documents in order to populate a domain OTR with new instances.
Our contributions combine Natural Language Processing (NLP) and data-mining approaches guided by the domain OTR. More precisely, we first focus on extracting units of measure, which are known to be difficult to identify because of their typographic variation. We rely on automatic text classification, using supervised learning methods, to reduce the search space for unit variants, and then propose a new similarity measure that identifies them while taking their syntactic properties into account. Second, we adapt and combine data-mining methods (sequential pattern and rule mining) with syntactic analysis to tackle the challenging task of identifying and extracting instances of n-ary relations buried in unstructured text.
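To make the unit-variant step concrete, here is a minimal sketch assuming a reference unit list from an OTR, simple normalization rules, a plain Levenshtein distance, and an arbitrary threshold; the thesis' own edit measures also exploit syntactic properties and are not reproduced here.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,               # deletion from a
                            curr[j - 1] + 1,           # insertion into a
                            prev[j - 1] + (ca != cb))) # substitution
        prev = curr
    return prev[-1]

def normalize(unit: str) -> str:
    """Collapse spaces and dots and lowercase before comparing (assumed rules)."""
    return unit.replace(" ", "").replace(".", "").lower()

def match_unit(candidate, reference_units, max_dist=0.25):
    """Return the closest reference unit if it is within the threshold."""
    cand = normalize(candidate)
    best = min(reference_units, key=lambda u: levenshtein(cand, normalize(u)))
    dist = levenshtein(cand, normalize(best)) / max(len(cand), len(best), 1)
    return best if dist <= max_dist else None

print(match_unit("m L.", ["mL", "g/mol", "cm3"]))   # -> mL
print(match_unit("G/MOL", ["mL", "g/mol", "cm3"]))  # -> g/mol
print(match_unit("kPa", ["mL", "g/mol", "cm3"]))    # -> None
```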
32

Méthode d’extraction d’informations géographiques à des fins d’enrichissement d’une ontologie de domaine / Geographical information extraction method in order to enrich a domain ontology

Nguyen, Van Tien 15 November 2012 (has links)
This thesis was carried out in the context of the ANR project GEONTO, which covers the construction, alignment, comparison, and exploitation of heterogeneous geographic ontologies. In this context, our goal is to automatically extract topographic terms from travelogues in order to enrich a geographical ontology originally designed by IGN. The proposed method identifies and extracts terms with a topographical connotation contained in a text. It is based on the automatic detection of certain linguistic relations used to annotate these terms. Its implementation builds on the principle of n-ary relations and uses NLP (Natural Language Processing) methods and techniques.
These are n-ary relations between the terms to extract and other elements of the text that can be identified with predefined external resources, such as specific lexicons: travelogue verbs (verbs of motion, verbs of perception, and topographic verbs), pre-positions (prepositions of place, adverbs, adjectives), toponyms, generic thesauri, and domain ontologies (here, the geographical ontology originally designed by IGN). Once marked by linguistic patterns, the proposed relations allow us to annotate and automatically extract terms; various cues then let us deduce whether the extracted terms evoke topographic concepts. The reasoning rules behind these deductions draw on intrinsic knowledge (the evocation of space in language), on external knowledge contained in the resources mentioned above, or on their combination. The strength of our approach is that it extracts not only terms attached directly to toponyms but also terms in sentence structures where other words intervene. Experiments on a corpus of 12 travelogues (2,419 pages, provided by the médiathèque of Pau) showed that our method is robust. It extracted 2,173 distinct terms, of which 1,191 were valid, a precision of 0.55. This shows that using the proposed relations is more effective than using (term, toponym) pairs, which yield 733 valid distinct terms with a precision of 0.38. Our method can also be used for other applications such as geographic named-entity recognition and spatial indexing of textual documents.
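As a toy illustration of the lexicon-and-pattern idea: a motion verb followed by a locative preposition signals that the noun phrase anchored by a known toponym contains a candidate topographic term. The lexicons, the pattern, and the example sentences below are assumptions for the sketch, not the resources used in the thesis.

```python
import re

MOTION_VERBS = r"(?:montons|descendons|arrivons|traversons)"  # assumed sample
LOC_PREPS = r"(?:vers|sur|dans)"                              # assumed sample
TOPONYMS = r"(?:Ossau|Gavarnie)"                              # assumed sample

# verb ... locative preposition ... <candidate term> ... toponym
PATTERN = re.compile(
    MOTION_VERBS + r"\s+" + LOC_PREPS + r"\s+(?:le|la|les|l')?\s*"
    r"([\w'-]+(?:\s+[\w'-]+)*?)\s+(?:de\s+|d')?" + TOPONYMS)

for sentence in ("Nous arrivons sur le plateau de Gavarnie",
                 "Nous montons vers l'arête d'Ossau"):
    m = PATTERN.search(sentence)
    if m:
        print(m.group(1))   # -> "plateau", then "arête" (topographic terms)
```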
33

Boundary Value Problems For Higher Order Linear Impulsive Differential Equations

Ugur, Omur 01 January 2003 (has links) (PDF)
The theory of impulsive differential equations has become an important area of research in recent years. Linear equations, meanwhile, are fundamental in most branches of applied mathematics, science, and technology. The theory of higher order linear impulsive equations, however, has not been studied as much as the corresponding theory of ordinary differential equations. In this work, higher order linear impulsive equations at fixed moments of impulses, together with certain boundary conditions, are investigated by making use of a Green's formula constructed for piecewise differentiable functions. Existence and uniqueness of solutions of such boundary value problems are also addressed. Properties of Green's functions for higher order impulsive boundary value problems are introduced, showing a striking difference when compared to classical boundary value problems of ordinary differential equations. Necessarily, instead of an ordinary Green's function there corresponds a sequence of Green's functions due to impulses. Finally, as a by-product of boundary value problems, eigenvalue problems for higher order linear impulsive differential equations are studied. The conditions for the existence of eigenvalues of linear impulsive operators are presented. Basic properties of eigensolutions of self-adjoint operators are also investigated. In particular, a necessary and sufficient condition for the self-adjointness of Sturm-Liouville operators is given. The corresponding integral equations for boundary value and eigenvalue problems are also demonstrated in the present work.
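For illustration, a second-order instance of the kind of problem studied might read as follows; this is a hedged generic form (the thesis treats arbitrary order, and its exact formulation may differ).

```latex
% A generic second-order illustration (assumed form): a linear equation with
% impulses at fixed moments theta_1 < ... < theta_p in (a, b), plus
% two-point boundary conditions.
\begin{align*}
  & x'' + a_1(t)\,x' + a_0(t)\,x = f(t), && t \in [a,b],\ t \neq \theta_i,\\
  & \Delta x\big|_{t=\theta_i} = d_i\,x(\theta_i^-), \qquad
    \Delta x'\big|_{t=\theta_i} = e_i\,x'(\theta_i^-), && i = 1,\dots,p,\\
  & \alpha_1 x(a) + \alpha_2 x'(a) = 0, \qquad
    \beta_1 x(b) + \beta_2 x'(b) = 0,
\end{align*}
% where \Delta x|_{t=\theta_i} = x(\theta_i^+) - x(\theta_i^-). Between
% impulses the solution behaves classically, so a Green's function can be
% built piecewise, one piece per subinterval (theta_i, theta_{i+1}).
```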
34

Low-cost implementation techniques for generic square and cross M-QAM constellations

Fernandes, Diogo 31 August 2015 (has links)
This work introduces techniques with reduced computational complexity for the hardware implementation of high-order M-ary quadrature amplitude modulation (M-QAM), which may be feasible for broadband communication systems. The proposed techniques cover square and cross M-QAM constellations (even and odd numbers of bits), the hard-decision rule, and the derivation of low-order M-QAM constellations from high-order ones. Performance analysis in terms of bit error rate (BER) is carried out when the M-QAM symbols are corrupted by either additive white Gaussian noise (AWGN) or additive impulsive Gaussian noise (AIGN). The BER results show that the performance loss of the proposed techniques is, on average, less than 1 dB, which is a remarkable result.
Additionally, the implementation of the proposed techniques on a field-programmable gate array (FPGA) device is described and analyzed. The FPGA-based results show that the proposed techniques can considerably reduce hardware resource utilization compared with techniques from the literature. A notable reduction in hardware resource utilization is achieved by the generic M-QAM technique in comparison with both the enhanced heuristic decision rule (HDR) technique and a previously designed technique, the HDR technique; based on the analyses performed, the enhanced HDR technique is itself less complex than the HDR technique. Finally, the numerical results show that the generic M-QAM technique can be eight times faster than the other two techniques when a large number of M-QAM symbols (e.g., > 1000) are transmitted consecutively.
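As a sketch of the decision each technique must realize cheaply in hardware, hard-decision detection for a square M-QAM constellation with levels at the odd integers can be written as below. This is a minimal model assuming unit scaling and ignoring Gray mapping; it is not the thesis' FPGA architecture.

```python
import math

def hard_decision(symbol: complex, M: int) -> complex:
    """Quantize I and Q independently to the nearest constellation level."""
    L = int(math.isqrt(M))                    # levels per dimension
    lmax = L - 1                              # outermost level (7 for 64-QAM)

    def q(x: float) -> int:
        level = 2 * round((x - 1) / 2) + 1    # nearest odd integer
        return max(-lmax, min(lmax, level))   # clip to the constellation

    return complex(q(symbol.real), q(symbol.imag))

print(hard_decision(0.2 + 1.1j, 16))   # -> (1+1j)
print(hard_decision(2.3 - 6.8j, 64))   # -> (3-7j)
```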
35

Representando famílias de autômatos celulares por meio de templates / Representing families of cellular automata by means of templates

Costa, Maurício Verardo da 10 February 2015 (has links)
The notion of a template for representing cellular automata (CA) rules is introduced. This enhances the standard representation based on a rule table, in that a template refers to a family of cellular automata instead of a single rule. Operations applicable to templates are defined, and their use is exemplified in the context of finding representations for rule sets that share properties. Wolfram Mathematica's functional nature and built-in equation-solving capabilities are central to developing these algorithms. Perspectives for using templates in further contexts are also discussed, along with possible extensions to the present work. In support of the template concept, a Wolfram Mathematica package called CATemplates is presented and shared with the community through a public repository.
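The template idea can be mimicked in a few lines of Python; this is an illustrative re-creation only (CATemplates itself is a Mathematica package, and its actual API is not reproduced here). A template is a rule table whose entries may be free variables, and expanding it enumerates every concrete rule in the family.

```python
from itertools import product

# Elementary CA rule table: outputs for neighborhoods 111, 110, ..., 000.
# None marks a free entry; the fixed entries pin down the family.
template = [None, 1, 0, None, 1, None, 0, 0]   # an assumed example

def expand(tmpl):
    """Yield every concrete rule table matching the template."""
    free = [i for i, v in enumerate(tmpl) if v is None]
    for bits in product((0, 1), repeat=len(free)):
        rule = list(tmpl)
        for i, b in zip(free, bits):
            rule[i] = b
        yield rule

for rule in expand(template):
    number = sum(b << (7 - i) for i, b in enumerate(rule))  # Wolfram number
    print(number, rule)   # the 8 elementary rules in this family
```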
36

Scalable algorithms for cloud-based Semantic Web data management / Algorithmes passant à l’échelle pour la gestion de données du Web sémantique sur les platformes cloud

Zampetakis, Stamatis 21 September 2015 (has links)
In order to build smart systems, where machines are able to reason much as humans do, data with semantics is a major requirement. This need led to the advent of the Semantic Web, which proposes standard ways of representing and querying data with semantics. RDF is the prevalent data model used to describe web resources, and SPARQL is the query language for expressing queries over RDF data. Being able to store and query data with semantics triggered the development of many RDF data management systems. The rapid evolution of the Semantic Web prompted a shift from centralized data management systems to distributed ones. The first such systems relied on P2P and client-server architectures, while recently the focus has moved to cloud computing. Cloud computing environments have strongly impacted research and development in distributed software platforms. Cloud providers offer distributed, shared-nothing infrastructures that may be used for data storage and processing. The main features of cloud computing include scalability, fault tolerance, and elastic allocation of computing and storage resources following the users' needs.
This thesis investigates the design and implementation of scalable algorithms and systems for cloud-based Semantic Web data management. In particular, we study the performance and cost of exploiting commercial cloud infrastructures to build Semantic Web data repositories, and the optimization of SPARQL queries for massively parallel frameworks. First, we introduce the basic concepts of the Semantic Web and the main components and frameworks interacting in massively parallel cloud-based systems. In addition, we provide an extended overview of existing RDF data management systems in centralized and distributed settings, emphasizing the critical concepts of storage, indexing, query optimization, and infrastructure. Second, we present AMADA, an architecture for RDF data management using public cloud infrastructures. We follow the Software as a Service (SaaS) model, where the complete platform runs in the cloud and appropriate APIs are provided to end users for storing and retrieving RDF data. We explore various storage and querying strategies, revealing their pros and cons with respect to performance and to monetary cost, an important new dimension to consider in public cloud services. Finally, we present CliqueSquare, a distributed RDF data management system built on top of Hadoop, incorporating a novel optimization algorithm able to produce massively parallel plans for SPARQL queries. We present a family of optimization algorithms, relying on n-ary (star) equality joins to build flat plans, and compare their ability to find the flattest plans possible. Inspired by existing partitioning and indexing techniques, we present a generic storage strategy suitable for storing RDF data in HDFS (the Hadoop Distributed File System). Our experimental results validate the efficiency and effectiveness of the optimization algorithm and demonstrate the overall performance of the system.
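As a toy illustration of the n-ary (star) equality-join idea, the sketch below groups the triple patterns of a basic graph pattern by shared join variable; each group can then be evaluated by a single n-ary join rather than a cascade of binary joins, which keeps the plan flat. The query and the grouping step are assumptions for the example; the real optimizer's clique-based decomposition and pattern assignment are not reproduced.

```python
from collections import defaultdict

# (subject, predicate, object) triple patterns; "?"-prefixed = variable.
bgp = [("?p", "worksAt", "?org"),
       ("?p", "name", "?n"),
       ("?p", "email", "?e"),
       ("?org", "locatedIn", "?city")]

def star_groups(patterns):
    """One candidate n-ary equality join per variable shared by >= 2 patterns."""
    by_var = defaultdict(list)
    for t in patterns:
        for v in t:
            if v.startswith("?"):
                by_var[v].append(t)
    return {v: ts for v, ts in by_var.items() if len(ts) > 1}

# Note: a pattern may fall in several stars (?p and ?org share one above);
# the real algorithm assigns each pattern to exactly one clique/star.
for var, ts in star_groups(bgp).items():
    print(f"n-ary join on {var}: joins {len(ts)} patterns in one level")
```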
37

Méthodes hybrides parallèles pour la résolution de problèmes d'optimisation combinatoire : application au clustering sous contraintes / Parallel hybrid methods for solving combinatorial optimization problems : application to clustering under constraints

Ouali, Abdelkader 03 July 2017 (has links)
Combinatorial optimization problems have become the target of much scientific research because of their importance in solving academic problems as well as real problems encountered in engineering and industry. Solving these problems with exact methods alone is often intractable because of the exorbitant processing time such methods would require to reach the optimal solution(s). In this thesis, we were interested in both the algorithmic context of solving combinatorial problems and the modeling context of these problems. At the algorithmic level, we explored hybrid methods, which excel in their ability to make exact and approximate methods cooperate in order to rapidly produce high-quality solutions. At the modeling level, we worked on the specification and exact resolution of complex pattern-set mining problems, in particular by studying scaling issues on large databases.
On the one hand, we proposed a first parallelization of the DGVNS algorithm, called CPDGVNS, which explores in parallel the different clusters of the tree decomposition, sharing the best overall solution in a master-worker model. Two further strategies, called RADGVNS and RSDGVNS, were proposed to improve the frequency at which intermediate solutions are exchanged between processes. Experiments carried out on difficult combinatorial problems show the effectiveness of our parallel methods. On the other hand, we proposed a hybrid approach combining Integer Linear Programming (ILP) techniques with pattern mining. Our approach is complete and takes advantage of the general ILP framework (providing a high level of flexibility and expressiveness) and of specialized heuristics for data mining (improving computing time). Beyond the general framework of pattern-set mining, two problems were studied in particular: conceptual clustering and the tiling problem. The experiments carried out showed the benefit of our proposal over constraint-based approaches and specialized heuristics.
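A schematic sketch of the master-worker scheme behind CPDGVNS is given below: workers each explore one cluster of the tree decomposition, and the master keeps the incumbent and rebroadcasts it as the starting point of the next round. The clusters, the toy objective, and the local_search stand-in are assumptions for the example; the actual algorithm runs VNS guided by the decomposition.

```python
import random
from concurrent.futures import ProcessPoolExecutor

def cost(assignment):
    """Toy objective: every variable wants the value 2."""
    return sum(abs(v - 2) for v in assignment.values())

def local_search(cluster, start):
    """Toy stand-in for VNS restricted to one cluster's variables."""
    best = dict(start)
    for _ in range(500):
        cand = dict(best)
        cand[random.choice(cluster)] = random.randint(0, 4)  # toy domain
        if cost(cand) < cost(best):
            best = cand
    return best

def cpdgvns(clusters, variables, rounds=3):
    """Master: farm clusters out to workers, keep and rebroadcast the best."""
    best = {v: 0 for v in variables}
    with ProcessPoolExecutor() as pool:
        for _ in range(rounds):
            for sol in pool.map(local_search, clusters, [best] * len(clusters)):
                if cost(sol) < cost(best):
                    best = sol
    return best

if __name__ == "__main__":
    variables = list("abcdef")
    clusters = [["a", "b", "c"], ["c", "d"], ["d", "e", "f"]]  # overlapping
    print(cpdgvns(clusters, variables))   # converges toward all-2s
```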
38

Shift gray codes

Williams, Aaron Michael 11 December 2009 (has links)
Combinatorial objects can be represented by strings, such as 21534 for the permutation (1 2)(3 5 4), or 110100 for the binary tree corresponding to the balanced parentheses (()()). Given a string s = s1 s2 ... sn, the right-shift operation shift(s, i, j) replaces the substring si si+1 ... sj by si+1 ... sj si. In other words, si is right-shifted into position j by applying the permutation (j j−1 ... i) to the indices of s. Right-shifts include prefix-shifts (i = 1) and adjacent-transpositions (j = i+1). A fixed-content language is a set of strings that contain the same multiset of symbols. Given a fixed-content language, a shift Gray code is a list of its strings where consecutive strings differ by a shift. This thesis asks if shift Gray codes exist for a variety of combinatorial objects. This abstract question leads to a number of practical answers. The first prefix-shift Gray code for multiset permutations is discovered, and it provides the first algorithm for generating multiset permutations in O(1)-time while using O(1) additional variables. Applications of these results include more efficient exhaustive solutions to stacker-crane problems, which are natural NP-complete traveling salesman variants. This thesis also produces the fastest algorithm for generating balanced parentheses in an array, and the first minimal-change order for fixed-content necklaces and Lyndon words. These results are consequences of the following theorem: Every bubble language has a right-shift Gray code. Bubble languages are fixed-content languages that are closed under certain adjacent-transpositions. These languages generalize classic combinatorial objects: k-ary trees, ordered trees with fixed branching sequences, unit interval graphs, restricted Schröder and Motzkin paths, linear extensions of B-posets, and their unions, intersections, and quotients. Each Gray code is circular and is obtained from a new variation of lexicographic order known as cool-lex order. Gray codes using only shift(s, 1, n) and shift(s, 1, n−1) are also found for multiset permutations. A universal cycle that omits the last (redundant) symbol from each permutation is obtained by recording the first symbol of each permutation in this Gray code. As a special case, these shorthand universal cycles provide a new fixed-density analogue to de Bruijn cycles, and the first universal cycle for the "middle levels" (binary strings of length 2k + 1 with sum k or k + 1).
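The right-shift operation itself is easy to state in code; a direct transcription of the definition above (1-indexed, as in the text) is:

```python
def shift(s: str, i: int, j: int) -> str:
    """Right-shift s[i] into position j (1-indexed): replace the substring
    s_i s_{i+1} ... s_j by s_{i+1} ... s_j s_i."""
    i, j = i - 1, j - 1                              # convert to 0-indexed
    return s[:i] + s[i + 1:j + 1] + s[i] + s[j + 1:]

print(shift("21534", 1, 5))   # prefix-shift (i = 1): -> "15342"
print(shift("21534", 2, 3))   # adjacent transposition (j = i+1): -> "25134"
```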
