  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Capturing Changes in Combinatorial Dynamical Systems via Persistent Homology

Ryan Slechta (12427508) 20 April 2022 (has links)
Recent innovations in combinatorial dynamical systems permit them to be studied with algorithmic methods. One such method from topological data analysis, called persistent homology, allows one to summarize the changing homology of a sequence of simplicial complexes. This dissertation explicates three methods for capturing and summarizing changes in combinatorial dynamical systems through the lens of persistent homology. The first places the Conley index in the persistent homology setting. This permits one to capture the persistence of salient features of a combinatorial dynamical system. The second shows how to capture changes in combinatorial dynamical systems at different resolutions by computing the persistence of the Conley-Morse graph. Finally, the third places Conley's notion of continuation in the combinatorial setting and permits the tracking of isolated invariant sets across a sequence of combinatorial dynamical systems.
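The dissertation's central tool, persistent homology, can be illustrated with the standard boundary-matrix reduction algorithm over Z/2. The following minimal Python sketch is not taken from the thesis — the filtration (a filled triangle) and all names are illustrative — but it is the textbook pairing of births and deaths of homology classes:

```python
def reduce_boundary(columns):
    """Standard persistence algorithm: left-to-right column reduction of the
    Z/2 boundary matrix; columns[j] is the set of row indices in column j."""
    low_to_col = {}
    for j, col in enumerate(columns):
        while col and max(col) in low_to_col:
            col ^= columns[low_to_col[max(col)]]  # add an earlier column (mod 2)
        if col:
            low_to_col[max(col)] = j  # simplex max(col) is born, dies at j
    return low_to_col

# Filtration of a filled triangle, simplices in order of appearance:
# three vertices (0,1,2), three edges (3,4,5), one 2-cell (6).
columns = [set(), set(), set(),        # vertices have empty boundary
           {0, 1}, {0, 2}, {1, 2},     # edge boundaries (pairs of vertices)
           {3, 4, 5}]                  # triangle boundary (its three edges)
pairs = reduce_boundary(columns)
print(sorted(pairs.items()))  # [(1, 3), (2, 4), (5, 6)]
```

The unpaired vertex 0 represents the connected component that persists forever; the pair (5, 6) is the 1-cycle created by the last edge and killed by the triangle.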
32

Computing component specifications from global system requirements / Beräkning av komponentspecifikationer från globala systemkrav

Björkman, Carl January 2017 (has links)
If we have a program with strict control-flow security requirements and want to ensure system requirements by verifying properties of that program, but part of the code base is a plug-in or third-party library that is unavailable at verification time, the procedure presented in this thesis can be used to generate the requirements that such plug-ins or third-party libraries must fulfil in order for the final product to pass the given system requirements. This thesis builds upon a transformation procedure that turns control-flow properties of a behavioural form into a structural form. The control-flow properties focus purely on control flow in the sense that they abstract away any kind of program data and target only call and return events. By behavioural properties we refer to properties regarding execution behaviour, and by structural properties to properties regarding sequences of instructions in the source code or object code. The result presented in this thesis takes this transformation procedure one step further and assumes that some methods (or functions or procedures, depending on the programming language) are given in the form of models called flow graphs, while the remaining methods are left unspecified. The output then becomes a set of structural constraints for the unspecified methods, which they must adhere to in order for any completion of the partial flow graph to satisfy the behavioural formula.
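To make the setting concrete, a flow graph can be modelled crudely as a call-edge map, and a control-flow property such as "main never reaches an unsafe method" can be checked on the known part of the graph while a plug-in method is left unspecified. The sketch below is illustrative only — the method names and the property are invented, and the thesis works with a richer flow-graph model:

```python
# Hypothetical partial flow graph: method -> set of methods it may call.
# "plugin_hook" stands for the unspecified third-party method.
flow_graph = {"main": {"parse", "plugin_hook"}, "parse": {"log"}, "log": set()}

def may_reach(graph, src, target, unspecified=frozenset({"plugin_hook"})):
    """Return True if target is reachable from src via the known call edges.
    Unspecified methods contribute no known edges."""
    seen, stack = set(), [src]
    while stack:
        m = stack.pop()
        if m == target:
            return True
        if m in seen or m in unspecified:
            continue
        seen.add(m)
        stack.extend(graph.get(m, ()))
    return False

print(may_reach(flow_graph, "main", "unsafe"))  # False for the known part
```

The derived constraint for the plug-in is then the structural requirement that "plugin_hook" must not call "unsafe", directly or transitively — any completion satisfying that keeps the whole program within the behavioural property.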
33

A Generalization of the Iteration Theorem for Recognizable Formal Power Series on Trees

Kramer, Patrick 15 November 2023 (has links)
Berstel and Reutenauer stated the iteration theorem for recognizable formal power series on trees over fields and vector spaces. The key idea of its proof is the existence of pseudo-regular matrices in matrix products. In this thesis, the theorem is generalized to integral domains and modules over integral domains. The text only requires the reader to have basic knowledge of linear algebra; concepts from advanced linear algebra and abstract algebra are introduced in the preliminary chapter. Contents: 1. Introduction; 2. Preliminaries; 3. Long products of matrices; 4. Formal power series on trees; 5. The generalized iteration theorem; 6. Conclusion
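A recognizable tree series can be pictured as a weighted bottom-up tree automaton: each symbol of rank k carries a multilinear map on state vectors, and the coefficient of the series at a tree is read off by a final linear form. The toy sketch below is our own illustration over the integers (not from the thesis); it realizes the series "number of leaves" with two states:

```python
def mu(tree):
    """Bottom-up state vector over the alphabet {a: rank 0, f: rank 2}.
    State = (number of leaves, constant 1); 'f' acts by a bilinear map."""
    if tree == "a":
        return (1, 1)
    _, left, right = tree
    u, v = mu(left), mu(right)
    return (u[0] * v[1] + u[1] * v[0], u[1] * v[1])  # bilinear in (u, v)

def weight(tree, lam=(1, 0)):
    """Coefficient of the series at `tree`: final form applied to mu(tree)."""
    u = mu(tree)
    return lam[0] * u[0] + lam[1] * u[1]

t = ("f", ("f", "a", "a"), "a")  # a binary tree with three leaves
print(weight(t))  # 3
```

Working over the integers rather than a field is exactly the kind of setting (modules over integral domains) that the generalized iteration theorem addresses.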
34

Short Proofs May Be Spacious: Understanding Space in Resolution

Nordström, Jakob January 2008 (has links)
Most state-of-the-art satisfiability algorithms today are variants of the DPLL procedure augmented with clause learning.
The two main bottlenecks for such algorithms are the amounts of time and memory used. Thus, understanding time and memory requirements for clause learning algorithms, and how these requirements are related to one another, is a question of considerable practical importance. In the field of proof complexity, these resources correspond to the length and space of resolution proofs for formulas in conjunctive normal form (CNF). There has been a long line of research investigating these proof complexity measures and relating them to the width of proofs, another measure which has turned out to be intimately connected with both length and space. Formally, the length of a resolution proof is the number of lines, i.e., clauses, the width of a proof is the maximal size of any clause in it, and the space is the maximal number of clauses kept in memory simultaneously if the proof is only allowed to infer new clauses from clauses currently in memory. While strong results have been established for length and width, our understanding of space has been quite poor. For instance, the space required to prove a formula is known to be at least as large as the needed width, but it has remained open whether space can be separated from width or whether the two measures coincide asymptotically. It has also been unknown whether the fact that a formula is provable in short length implies that it is also provable in small space (which is the case for length versus width), or whether on the contrary these measures are "completely unrelated" in the sense that short proofs can be maximally complex with respect to space. In this thesis, as an easy first observation we present a simplified proof of the recent length-space trade-off result for resolution in (Hertel and Pitassi 2007) and show how our ideas can be used to prove a couple of other exponential trade-offs in resolution. 
Next, we prove that there are families of CNF formulas that can be proven in linear length and constant width but require space growing logarithmically in the formula size, later improving this exponentially to the square root of the size. These results thus separate space and width. Using a related but different approach, we then resolve the question about the relation between space and length by proving an optimal separation between them. More precisely, we show that there are families of CNF formulas of size O(n) that have resolution proofs of length O(n) and width O(1) but for which any proof requires space Omega(n/log n). All of these results are achieved by studying so-called pebbling formulas defined in terms of pebble games over directed acyclic graphs (DAGs) and proving lower bounds on the space requirements for such formulas in terms of the black-white pebbling price of the underlying DAGs. Finally, we observe that our optimal separation of space and length is in fact a special case of a more general phenomenon. Namely, for any CNF formula F and any Boolean function f:{0,1}^d -> {0,1}, replace every variable x in F by f(x_1, ..., x_d) and rewrite this new formula in CNF in the natural way, denoting the resulting formula F[f]. Then if F and f have the right properties, F[f] can be proven in resolution in essentially the same length and width as F but the minimal space needed for F[f] is lower-bounded by the number of variables that have to be mentioned simultaneously in any proof for F.
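For readers unfamiliar with the proof system, the resolution rule itself is tiny: from clauses C ∨ x and D ∨ ¬x one derives C ∨ D, and a refutation ends with the empty clause. Length counts the derived clauses, width is the size of the largest clause, and space is how many clauses must be held in memory at once. A minimal sketch (the integer-literal clause encoding is our own convention):

```python
def resolve(c1, c2, x):
    """Resolve clauses c1 (containing x) and c2 (containing -x) on variable x.
    Clauses are frozensets of nonzero ints; -v encodes the negation of v."""
    assert x in c1 and -x in c2
    return (c1 - {x}) | (c2 - {-x})

# Refuting the unsatisfiable CNF {x}, {-x, y}, {-y}:
c = resolve(frozenset({1}), frozenset({-1, 2}), 1)  # derives {y}
empty = resolve(c, frozenset({-2}), 2)              # derives the empty clause
print(sorted(empty))  # []
```

This refutation has length 2 (derived clauses), width 2 (the axiom {-x, y}), and needs very little space; the thesis' results concern how these three measures can diverge on hard formula families.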
35

Efficient and Secure Equality-based Two-party Computation

Javad Darivandpour (11190051) 27 July 2021 (has links)
Multiparty computation refers to a scenario in which multiple distinct yet connected parties aim to jointly compute a functionality. Over recent decades, with the rapid spread of the internet and digital technologies, multiparty computation has become an increasingly important topic. In addition to the integrity of computation in such scenarios, it is essential to ensure that the privacy of sensitive information is not violated. Thus, secure multiparty computation aims to provide sound approaches for the joint computation of desired functionalities in a secure manner: not only must the integrity of computation be guaranteed, but also each party must not learn anything about the other parties' private data. In other words, each party learns no more than what can be inferred from its own input and its prescribed output.

This thesis considers secure two-party computation over arithmetic circuits based on additive secret sharing. In particular, we focus on efficient and secure solutions for fundamental functionalities that depend on the equality of private comparands. The first direction we take is providing efficient protocols for two major problems of interest. Specifically, we give novel and efficient solutions for private equality testing and multiple variants of secure wildcard pattern matching over any arbitrary finite alphabet. These problems are of vital importance: private equality testing is a basic building block in many secure multiparty protocols, and secure pattern matching is frequently used in various data-sensitive domains, including (but not limited to) private information retrieval and healthcare-related data analysis. The second direction we take towards a performance improvement in equality-based secure two-party computation is via introducing a generic functionality-independent secure preprocessing that results in an overall computation and communication cost reduction for any subsequent protocol. We achieve this by providing the first precise functionality formulation and secure protocols for replacing original inputs with much smaller inputs such that this replacement neither changes the outcome of subsequent computations nor violates the privacy of sensitive inputs. Moreover, our input-size reduction opens the door to a new approach for efficiently solving Private Set Intersection. The protocols we give in this thesis are typically secure in the semi-honest adversarial threat model.
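The flavor of equality testing on additive shares can be conveyed with a toy two-party sketch: each value is split into shares modulo a public prime, and the parties open a randomly masked difference, which is zero iff the inputs are equal. This is only an idealized illustration — the modulus, the in-the-clear simulation of both parties, and the masking trick are our simplifications, and the thesis' protocols are more involved:

```python
import secrets

P = 2_147_483_647  # public prime modulus (an assumption for this sketch)

def share(x):
    """Split x into two additive shares mod P."""
    r = secrets.randbelow(P)
    return r, (x - r) % P

def reconstruct(s1, s2):
    return (s1 + s2) % P

def equal(a, b):
    """Toy equality test: open r*(a-b) for a random nonzero mask r.
    The opened value is 0 iff a == b; otherwise it is uniform, hiding a-b."""
    a1, a2 = share(a)
    b1, b2 = share(b)
    r = 1 + secrets.randbelow(P - 1)                    # random nonzero mask
    d1 = (r * (a1 - b1)) % P                            # party 1, local only
    d2 = (r * (a2 - b2)) % P                            # party 2, local only
    return reconstruct(d1, d2) == 0

print(equal(42, 42), equal(42, 7))  # True False
```

Note that multiplying a share by the public mask r is a purely local operation, which is what makes additive sharing attractive for arithmetic-circuit computation.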
36

Increasing Policy Network Size Does Not Guarantee Better Performance in Deep Reinforcement Learning

Zachery Peter Berg (12455928) 25 April 2022 (has links)
The capacity of deep reinforcement learning policy networks has been found to affect the performance of trained agents. It has been observed that policy networks with more parameters have better training performance and generalization ability than smaller networks. In this work, we find cases where this does not hold true. We observe unimodal variance in the zero-shot test return of varying-width policies, which accompanies a drop in both train and test return. Empirically, we demonstrate mostly monotonically increasing performance or mostly optimal performance as the width of deep policy networks increases, except near the variance mode. Finally, we find a scenario where larger networks have increasing performance up to a point, then decreasing performance. We hypothesize that these observations align with the theory of double descent in supervised learning, although with specific differences.
37

Classification of P-oligomorphic groups, conjectures of Cameron and Macpherson / Classification des groupes P-oligomorphes, conjectures de Cameron et Macpherson

Falque, Justine 29 November 2019 (has links)
This PhD thesis falls under the fields of algebraic combinatorics and group theory. Precisely, it brings a contribution to the domain that studies profiles of oligomorphic permutation groups and their behaviors. The first part of this manuscript introduces most of the tools that will be needed later on, starting with elements of combinatorics and algebraic combinatorics. We define counting functions through classical examples; with a view to studying them, we argue the relevance of adding a graded algebra structure on the counted objects. We also bring up the notions of order and lattice. Then, we provide an overview of the basic definitions and properties related to permutation groups and to invariant theory. We end this part with a description of the Pólya enumeration method, which allows one to count objects under a group action. The second part is dedicated to introducing the domain this thesis comes within
It dwells on profiles of relational structures,and more specifically orbital profiles.If G is an infinite permutation group, its profile is the counting function which maps any n > 0 to the number of orbits of n-subsets, for the inducedaction of G on the finite subsets of elements.Cameron conjectured that the profile of G is asymptotically equivalent to a polynomial whenever it is bounded by apolynomial.Another, stronger conjecture was later made by Macpherson : it involves a certain structure of graded algebra on the orbits of subsetscreated by Cameron, the orbit algebra, and states that if the profile of G is bounded by a polynomial, then its orbit algebra is finitely generated.As a start in our study of this problem, we develop some examples and get our first hints towards a resolution by examining the block systems ofgroups with profile bounded by a polynomial --- that we call P-oligomorphic ---, as well as the notion of subdirect product.The third part is the proof of a classification of P-oligomorphic groups,with Macpherson's conjecture as a corollary.First, we study the combinatorics of the lattice of block systems,which leads to identifying one special, generalized such system, that consists of blocks of blocks with good properties.We then tackle the elementary case when there is only one such block of blocks, for which we establish a classification. The proof borrows to the subdirect product concept to handle synchronizations within the group, and relied on an experimental approach on computer to first conjecture the classification.In the general case, we evidence the structure of a semi-direct product involving the minimal normal subgroup of finite index and some finite group.This allows to formalize a classification of all P-oligomorphic groups, the main result of this thesis, and to deduce the form of the orbit algebra: (little more than) an explicit algebra of invariants of a finite group. 
This implies the conjectures of Macpherson and Cameron, and a deep understanding of these groups.The appendix provides parts of the code that was used, and a glimpse at that resulting from the classification afterwards,that allows to manipulate P-oligomorphic groups by apropriate algorithmics. Last, we include our earlier (weaker) proof of the conjectures.
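The profile can be computed directly for finite toy examples: enumerate the k-subsets and collect their orbits under the group action. The sketch below is our own illustration (infinite oligomorphic groups of course require the structural theory developed in the thesis); it computes the orbit counts for the cyclic group C4 acting on 4 points:

```python
from itertools import combinations

def orbit_count(group, n, k):
    """Number of orbits of k-subsets of {0..n-1} under a permutation group,
    given as a list of permutations (tuples mapping i -> perm[i])."""
    subsets = {frozenset(c) for c in combinations(range(n), k)}
    orbits = 0
    while subsets:
        s = next(iter(subsets))
        orbit = {frozenset(g[i] for i in s) for g in group}
        subsets -= orbit
        orbits += 1
    return orbits

# Cyclic group C4 acting on 4 points: orbit counts for k = 0..4
c4 = [tuple((i + s) % 4 for i in range(4)) for s in range(4)]
print([orbit_count(c4, 4, k) for k in range(5)])  # [1, 1, 2, 1, 1]
```

For k = 2 the two orbits are the "adjacent" pairs {0,1}, {1,2}, {2,3}, {3,0} and the "opposite" pairs {0,2}, {1,3}; a profile bounded by a polynomial in n is precisely the hypothesis of the Cameron and Macpherson conjectures.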
38

New Spatio-temporal Hawkes Process Models For Social Good

Wen-Hao Chiang (12476658) 28 April 2022 (has links)
As more and more datasets with self-exciting properties become available, the demand for robust models that capture contagion across events is also getting stronger. Hawkes processes stand out given their ability to capture a wide range of contagion and self-excitation patterns, including the transmission of infectious disease, earthquake aftershock distributions, near-repeat crime patterns, and overdose clusters. The Hawkes process is flexible in modeling these various applications through parametric and non-parametric kernels that model event dependencies in space, time and on networks.

In this thesis, we develop new frameworks that integrate Hawkes process models with multi-armed bandit algorithms, high-dimensional marks, and high-dimensional auxiliary data to solve problems in search and rescue, forecasting infectious disease, and early detection of overdose spikes.

In Chapter 3, we develop a method with applications to the crisis of increasing overdose mortality over the last decade. We first encode the molecular substructures found in a drug overdose toxicology report. We then cluster these overdose encodings into different overdose categories and model these categories with spatio-temporal multivariate Hawkes processes. Our results demonstrate that the proposed methodology can improve estimation of the magnitude of an overdose spike based on the substances found in an initial overdose.

In Chapter 4, we build a framework for multi-armed bandit problems arising in event detection where the underlying process is self-exciting. We derive the expected number of events for Hawkes processes given a parametric model for the intensity and then analyze the regret bound of a Hawkes process UCB-normal algorithm. By introducing Hawkes process modeling into the upper confidence bound construction, our models can detect more events of interest under the multi-armed bandit problem setting. We apply the Hawkes bandit model to spatio-temporal data on crime events and earthquake aftershocks. We show that the model can quickly learn to detect hotspot regions, when events are unobserved, while striking a balance between exploitation and exploration.

In Chapter 5, we present a new spatio-temporal framework for integrating Hawkes processes with multi-armed bandit algorithms. Compared to the methods proposed in Chapter 4, the upper confidence bound is constructed through Bayesian estimation of a spatial Hawkes process to balance the trade-off between exploiting and exploring geographic regions. The model is validated through simulated datasets and real-world datasets such as flooding events and improvised explosive device (IED) attack records. The experimental results show that our model outperforms baseline spatial MAB algorithms on reward and ranking metrics.

In Chapter 6, we demonstrate that the Hawkes process is a powerful tool to model infectious disease transmission. We develop models using Hawkes processes with spatio-temporal covariates to forecast COVID-19 transmission at the county level. In the proposed framework, we show how to estimate the dynamic reproduction number of the virus within an EM algorithm through a regression on Google mobility indices. We also include demographic covariates as spatial information to enhance the accuracy. Such an approach is tested on both short-term and long-term forecasting tasks. The results show that the Hawkes process outperforms several benchmark models published in a public forecast repository. The model also provides insights on important covariates and mobility that impact COVID-19 transmission in the U.S.

Finally, in Chapter 7, we discuss implications of the research and future research directions.
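A univariate Hawkes process with exponential kernel, λ(t) = μ + α Σ_{t_i<t} exp(−β(t−t_i)), can be simulated with Ogata's thinning algorithm, the standard tool behind models of this kind. A minimal sketch (parameter values are illustrative; stability requires the branching ratio α/β < 1):

```python
import math
import random

def intensity(t, events, mu, alpha, beta):
    """Conditional intensity lambda(t) given past event times."""
    return mu + alpha * sum(math.exp(-beta * (t - s)) for s in events if s < t)

def simulate_hawkes(mu, alpha, beta, T, seed=42):
    """Ogata thinning: propose candidates under an upper bound on the
    intensity, accept each with probability lambda(t) / bound."""
    random.seed(seed)
    events, t = [], 0.0
    while True:
        # Intensity decays between events, so lambda just after t bounds
        # lambda on the whole interval up to the next candidate.
        lam_bar = intensity(t, events, mu, alpha, beta) + alpha
        t += random.expovariate(lam_bar)
        if t >= T:
            break
        if random.random() <= intensity(t, events, mu, alpha, beta) / lam_bar:
            events.append(t)  # accepted; this point excites future intensity
    return events

events = simulate_hawkes(mu=0.5, alpha=0.8, beta=1.2, T=50.0)
```

Each accepted event raises the intensity by α and the effect decays at rate β, which is exactly the self-excitation that the thesis exploits for overdose spikes, crime hotspots, and epidemic forecasting.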
39

Building the Intelligent IoT-Edge: Balancing Security and Functionality using Deep Reinforcement Learning

Anand A Mudgerikar (11791094) 19 December 2021 (has links)
The exponential growth of the Internet of Things (IoT) and cyber-physical systems is resulting in complex environments comprising various devices interacting with each other and with users. In addition, rapid advances in artificial intelligence are making those devices able to autonomously modify their behaviors through techniques such as reinforcement learning (RL). There is thus the need for an intelligent monitoring system on the network edge with a global view of the environment to autonomously predict optimal device actions. However, it is clear that ensuring safety and security in such environments is critical. To this effect, we develop a constrained RL framework for IoT environments that determines optimal device actions with respect to user-defined goals or required functionalities using deep Q-learning. We use anomaly-based intrusion detection on the network edge to dynamically generate security and safety policies to constrain the RL agent in the framework. We analyze the balance required between 'safety/security' and 'functionality' in IoT environments by manipulating the agent's exploration of safe and unsafe, but potentially beneficial, state spaces in the RL framework. We instantiate the framework for testing on application-layer control in smart home environments, and on network-layer control, including functionalities like rate control and routing, for SDN-based environments.
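The core idea — an RL agent whose action choices are constrained by dynamically generated safety policies — can be sketched with tabular Q-learning and an action mask standing in for the intrusion-detection constraints. Everything here (the chain environment, the unsafe set, the hyperparameters) is an invented minimal illustration, not the thesis' deep Q-learning framework:

```python
import random

def step(s, a):
    """Toy 4-state chain: action 1 moves right, action 0 stays; goal is state 3."""
    s2 = min(s + 1, 3) if a == 1 else s
    return s2, (1.0 if s2 == 3 else 0.0), s2 == 3

def constrained_q_learning(n_states, n_actions, step_fn, unsafe,
                           episodes=200, alpha=0.1, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning where actions flagged unsafe are masked out of both
    action selection and the bootstrapped target."""
    random.seed(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        for _ in range(50):
            allowed = [a for a in range(n_actions) if (s, a) not in unsafe]
            if random.random() < eps:
                a = random.choice(allowed)
            else:
                a = max(allowed, key=lambda a: Q[s][a])
            s2, r, done = step_fn(s, a)
            best = max(Q[s2][a2] for a2 in range(n_actions)
                       if (s2, a2) not in unsafe)
            Q[s][a] += alpha * (r + gamma * best - Q[s][a])
            s = s2
            if done:
                break
    return Q

# Mask (state 2, action 0) as unsafe: the agent never evaluates or takes it.
Q = constrained_q_learning(4, 2, step, unsafe={(2, 0)})
```

In the thesis' setting the `unsafe` set would be regenerated at runtime by the anomaly-based intrusion detector, tightening or relaxing the mask as the environment changes.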
40

Representation of Monoids and Lattice Structures in the Combinatorics of Weyl Groups / Représentations de monoïdes et structures de treillis en combinatoire des groupes de Weyl.

Gay, Joël 25 June 2018 (has links)
La combinatoire algébrique est le champ de recherche qui utilise des méthodes combinatoires et des algorithmes pour étudier les problèmes algébriques, et applique ensuite des outils algébriques à ces problèmes combinatoires. L’un des thèmes centraux de la combinatoire algébrique est l’étude des permutations car elles peuvent être interprétées de bien des manières (en tant que bijections, matrices de permutations, mais aussi mots sur des entiers, ordre totaux sur des entiers, sommets du permutaèdre…). Cette riche diversité de perspectives conduit alors aux généralisations suivantes du groupe symétrique. Sur le plan géométrique, le groupe symétrique engendré par les transpositions élémentaires est l’exemple canonique des groupes de réflexions finis, également appelés groupes de Coxeter. Sur le plan monoïdal, ces même transpositions élémentaires deviennent les opérateurs du tri par bulles et engendrent le monoïde de 0-Hecke, dont l’algèbre est la spécialisation à q=0 de la q-déformation du groupe symétrique introduite par Iwahori. Cette thèse se consacre à deux autres généralisations des permutations. Dans la première partie de cette thèse, nous nous concentrons sur les matrices de permutations partielles, en d’autres termes les placements de tours ne s’attaquant pas deux à deux sur un échiquier carré. Ces placements de tours engendrent le monoïde de placements de tours, une généralisation du groupe symétrique. Dans cette thèse nous introduisons et étudions le 0-monoïde de placements de tours comme une généralisation du monoïde de 0-Hecke. Son algèbre est la dégénérescence à q=0 de la q-déformation du monoïde de placements de tours introduite par Solomon. 
On étudie par la suite les propriétés monoïdales fondamentales du 0-monoïde de placements de tours (ordres de Green, propriété de treillis du R-ordre, J-trivialité) ce qui nous permet de décrire sa théorie des représentations (modules simples et projectifs, projectivité sur le monoïde de 0-Hecke, restriction et induction le long d’une fonction d’inclusion).Les monoïdes de placements de tours sont en fait l’instance en type A de la famille des monoïdes de Renner, définis comme les complétés des groupes de Weyl (c’est-à-dire les groupes de Coxeter cristallographiques) pour la topologie de Zariski. Dès lors, dans la seconde partie de la thèse nous étendons nos résultats du type A afin de définir les monoïdes de 0-Renner en type B et D et d’en donner une présentation. Ceci nous conduit également à une présentation des monoïdes de Renner en type B et D, corrigeant ainsi une présentation erronée se trouvant dans la littérature depuis une dizaine d’années. Par la suite, nous étudions comme en type A les propriétés monoïdales de ces nouveaux monoïdes de 0-Renner de type B et D : ils restent J-triviaux, mais leur R-ordre n’est plus un treillis. Cela ne nous empêche pas d’étudier leur théorie des représentations, ainsi que la restriction des modules projectifs sur le monoïde de 0-Hecke qui leur est associé. Enfin, la dernière partie de la thèse traite de différentes généralisations des permutations. Dans une récente séries d’articles, Châtel, Pilaud et Pons revisitent la combinatoire algébrique des permutations (ordre faible, algèbre de Hopf de Malvenuto-Reutenauer) en terme de combinatoire sur les ordres partiels sur les entiers. Cette perspective englobe également la combinatoire des quotients de l’ordre faible tels les arbres binaires, les séquences binaires, et de façon plus générale les récents permutarbres de Pilaud et Pons. Nous généralisons alors l’ordre faibles aux éléments des groupes de Weyl. 
Ceci nous conduit à décrire un ordre sur les sommets des permutaèdres, associaèdres généralisés et cubes dans le même cadre unifié. Ces résultats se basent sur de subtiles propriétés des sommes de racines dans les groupes de Weyl, qui s’avèrent ne pas fonctionner pour les groupes de Coxeter non cristallographiques. / Algebraic combinatorics is the research field that uses combinatorial methods and algorithms to study algebraic problems, and in turn applies algebraic tools to combinatorial problems. One of the central topics of algebraic combinatorics is the study of permutations, which can be interpreted in many different ways (as bijections, permutation matrices, words over integers, total orders on integers, vertices of the permutahedron…). This rich diversity of perspectives leads to the following generalizations of the symmetric group. On the geometric side, the symmetric group generated by simple transpositions is the canonical example of finite reflection groups, also called Coxeter groups. On the monoidal side, the same simple transpositions become the bubble sort operators that generate the 0-Hecke monoid, whose algebra is the specialization at q=0 of Iwahori’s q-deformation of the symmetric group. This thesis deals with two further generalizations of permutations. In the first part of this thesis, we focus on partial permutation matrices, that is, placements of pairwise non-attacking rooks on an n by n chessboard, simply called rooks. Rooks generate the rook monoid, a generalization of the symmetric group. In this thesis we introduce and study the 0-rook monoid, a generalization of the 0-Hecke monoid. Its algebra is the degeneracy at q=0 of the q-deformation of the rook monoid introduced by Solomon.
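To make the monoidal picture concrete, here is a minimal Python sketch (our own illustration, not taken from the thesis) of the bubble sort operators π_i that generate the 0-Hecke monoid, acting on permutations in one-line notation; the function name `pi` is an assumption of this sketch.

```python
from itertools import permutations

def pi(i, w):
    """0-Hecke bubble sort operator pi_i: sort the entries in
    positions i and i+1 (1-indexed) of the permutation w."""
    w = list(w)
    if w[i - 1] > w[i]:
        w[i - 1], w[i] = w[i], w[i - 1]
    return tuple(w)

# Check the defining relations of the 0-Hecke monoid of S_3:
for w in permutations((1, 2, 3)):
    # idempotence: pi_i^2 = pi_i (the q=0 degeneration of the
    # quadratic Hecke relation; note s_i^2 = identity instead)
    assert pi(1, pi(1, w)) == pi(1, w)
    assert pi(2, pi(2, w)) == pi(2, w)
    # braid relation: pi_1 pi_2 pi_1 = pi_2 pi_1 pi_2
    assert pi(1, pi(2, pi(1, w))) == pi(2, pi(1, pi(2, w)))

# Bubble sort: n - 1 passes of pi_1, ..., pi_{n-1} sort any permutation.
w = (4, 3, 2, 1)
for _ in range(3):
    for i in (1, 2, 3):
        w = pi(i, w)
assert w == (1, 2, 3, 4)
```

The contrast with the simple transpositions s_i is visible in the first relation: the s_i are involutions, while the π_i are idempotent, which is exactly what the specialization at q=0 produces.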
We study fundamental monoidal properties of the 0-rook monoid (Green orders, the lattice property of the R-order, J-triviality), which allow us to describe its representation theory (simple and projective modules, projectivity over the 0-Hecke monoid, restriction and induction along an inclusion map). Rook monoids are in fact the type A instance of the family of Renner monoids, the completions of the Weyl groups (that is, the crystallographic Coxeter groups) for the Zariski topology. In the second part of this thesis we extend our type A results to define the 0-Renner monoids of types B and D and give presentations of them. This also leads to presentations of the Renner monoids of types B and D, correcting an erroneous presentation that had been in the literature for about a decade. As in type A, we study the monoidal properties of the 0-Renner monoids of types B and D: they are still J-trivial, but their R-orders are no longer lattices. We nonetheless study their representation theory, as well as the restriction of projective modules over the corresponding 0-Hecke monoids. The third part of this thesis deals with further generalizations of permutations. In a recent series of papers, Châtel, Pilaud and Pons revisit the algebraic combinatorics of permutations (weak order, Malvenuto-Reutenauer Hopf algebra) in terms of the combinatorics of integer posets. This perspective also encompasses the combinatorics of quotients of the weak order, such as binary trees, binary sequences and, more generally, the recent permutrees of Pilaud and Pons. We then generalize the weak order to the elements of the Weyl groups. This enables us to describe an order on the vertices of the permutahedra, generalized associahedra and cubes within the same unified framework. These results rely on subtle properties of sums of roots in Weyl groups, which in fact fail for non-crystallographic Coxeter groups.
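The rooks of the first part can likewise be enumerated directly. The following sketch (again our own illustration, with a hypothetical function name) lists every placement of pairwise non-attacking rooks on an n by n board and checks the standard cardinality of the rook monoid, the sum over k of C(n,k)² · k!, giving 2, 7, 34 elements for n = 1, 2, 3.

```python
from itertools import combinations, permutations
from math import comb, factorial

def rook_placements(n):
    """Yield every placement of pairwise non-attacking rooks on an
    n x n board, as a frozenset of (row, column) pairs: at most one
    rook per row and per column, i.e. a partial permutation matrix."""
    for k in range(n + 1):                          # number of rooks
        for rows in combinations(range(n), k):      # occupied rows
            for cols in permutations(range(n), k):  # injective column choice
                yield frozenset(zip(rows, cols))

# The rook monoid on an n x n board has sum_k C(n,k)^2 * k! elements:
# choose the k occupied rows, the k occupied columns, then match them.
for n in (1, 2, 3):
    count = sum(1 for _ in rook_placements(n))
    assert count == sum(comb(n, k) ** 2 * factorial(k) for k in range(n + 1))
```

For n = 2 this enumerates the 7 elements of the rook monoid: the empty board, four single-rook boards, and the two permutation matrices.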