1

Parameterized Shape Grammar for n-fold Generating Islamic Geometric Motifs

Sayed, Zahra, Ugail, Hassan, Palmer, Ian J., Purdy, J., Reeve, Carlton January 2015
The complex formation of Islamic geometric patterns (IGPs) is one of the distinctive features of Islamic art and architecture. Many have attempted to reproduce these patterns in digital form using various pattern generation techniques. Shape grammars are an effective pattern generation method, providing good aesthetic results. In this paper we present a novel approach to generating 3D IGPs using an extended shape grammar, the Parameterized Shape Grammar (PSG). The PSG allows a user to generate both original and novel forms of Islamic geometric motifs (a motif being the repeated unit of a pattern). It is generalized to generate n-fold Islamic geometric motifs in a 3D environment and is practically implemented as a 3D modeling tool within Autodesk Maya. The parametrization within each grammar rule is the key to generating numerous original and novel Islamic geometric motifs.
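The idea of a parameterised grammar rule can be illustrated with a minimal, hypothetical sketch (ours, not the authors' PSG, which is a full 3D tool inside Maya): a single rule that maps parameters (n, k, radius) to the vertex cycle of an {n/k} star polygon, a basic n-fold motif.

```python
import math

def star_motif(n, k=3, radius=1.0):
    """One parameterised 'rule': connect every k-th of n points equally
    spaced on a circle, giving the vertex cycle of an {n/k} star polygon.
    Assumes gcd(n, k) == 1 so the path visits all n points."""
    points = [(radius * math.cos(2 * math.pi * i / n),
               radius * math.sin(2 * math.pi * i / n)) for i in range(n)]
    # Applying the same rule with different (n, k, radius) values
    # yields different n-fold motifs from one grammar rule.
    return [points[(i * k) % n] for i in range(n)]

print(star_motif(8))   # an 8-fold star motif, as in many classical patterns
```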
2

(Méta)-noyaux constructifs et linéaires dans les graphes peu denses / Constructive and Linear (Meta)-Kernelisations on Sparse Graphs

Garnero, Valentin 04 July 2016
In the fields of algorithmics and complexity, a large area of research is based on the assumption that P ≠ NP (Polynomial time and Non-deterministic Polynomial time), which means that there are problems for which a solution can be verified, but not constructed, in polynomial time. Under this assumption, many natural problems are not in P (that is, they admit no efficient algorithm), which has led to the development of several branches of algorithmics. One of them is parameterised complexity. It provides exact algorithms whose complexity is analysed as a function of both the size of the instance and a parameter; the parameter allows a finer-grained analysis of the complexity. In this context, an algorithm is considered efficient if it is fixed-parameter tractable (fpt), that is, if its complexity is exponential in the parameter but polynomial in the size of the instance. Problems solvable by such algorithms form the class FPT (Fixed Parameter Tractable).
Kernelisation is a technique that, among other things, yields fpt algorithms. It can be viewed as a preprocessing of the instance, with a guarantee on the compression of the data. More formally, a kernelisation is a polynomial reduction from a problem to itself, with the additional constraint that the size of the kernel (the reduced instance) is bounded by a function of the parameter. To obtain an fpt algorithm, it suffices to solve the problem on the kernel, for example by brute force (whose complexity is exponential in the parameter). The existence of a kernelisation therefore implies the existence of an fpt algorithm, and the converse also holds. Nevertheless, the existence of an efficient fpt algorithm does not guarantee a small kernel, that is, a kernel of linear or polynomial size. Under certain hypotheses, some problems admit no kernel at all (that is, they lie outside FPT), and some problems in FPT admit no polynomial kernel.
One of the major results in the field is the construction of a linear kernel for the Dominating Set problem on planar graphs, by Alber, Fellows and Niedermeier. Their region decomposition method has since been reused many times to build kernels for variants of Dominating Set on planar graphs. The method, however, contained a number of inaccuracies that invalidated the proofs. In the first part of the thesis, we present a more rigorous version of this method and illustrate it with two problems: Red-Blue Dominating Set and Total Dominating Set. The method was subsequently generalised both to larger classes of graphs (bounded genus, minor-free, topological-minor-free) and to larger families of problems. These meta-results prove the existence of linear or polynomial kernels for every problem satisfying certain generic conditions on a class of sparse graphs. The price of this generality is that the proofs are not constructive: they provide no extraction algorithm, and the bound on the size of the kernel is not explicit. In the second part of the thesis, we take a first step towards constructive meta-results: we propose a general framework for building linear kernels, inspired by the principles of dynamic programming and by a meta-result of Bodlaender, Fomin, Lokshtanov, Penninkx, Saurabh and Thilikos.
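The notion of a kernel defined above can be made concrete with the classic Buss kernelisation for Vertex Cover — a textbook example chosen for brevity, not one of the thesis's Dominating Set results:

```python
def vertex_cover_kernel(edges, k):
    """Buss kernel for Vertex Cover: a polynomial-time self-reduction
    whose output size is bounded by a function of the parameter alone.
    Returns (kernel_edges, k') with |kernel_edges| <= k'^2, or None if
    (G, k) has no vertex cover of size k."""
    edges = {frozenset(e) for e in edges}
    while True:
        degree = {}
        for e in edges:
            for v in e:
                degree[v] = degree.get(v, 0) + 1
        high = next((v for v, d in degree.items() if d > k), None)
        if high is None:
            break
        # A vertex of degree > k must be in every cover of size <= k:
        # otherwise all of its > k neighbours would be needed instead.
        edges = {e for e in edges if high not in e}
        k -= 1
        if k < 0:
            return None
    # With maximum degree <= k, a cover of size k covers <= k^2 edges.
    if len(edges) > k * k:
        return None
    return edges, k
```

Solving the reduced instance by brute force (exponential in k only) then gives an fpt algorithm, exactly as described above.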
3

Algorithmic multiparameterised verification of safety properties: process algebraic approach

Siirtola, A. (Antti) 28 September 2010
Due to the increasing amount of concurrency, systems have become difficult to design and analyse. Formal verification, which means proving the correctness of a system, has turned out to be useful in this effort. Unfortunately, the application domain of formal verification methods is often left indefinite, tools are typically unavailable, and most of the techniques are not especially well suited to the verification of software systems. These are the questions addressed in the thesis. A typical approach to modelling systems and specifications is to consider them parameterised by the restrictions of the execution environment, which results in an (infinite) family of finite-state verification tasks. The thesis introduces a novel approach to the verification of such infinite specification-system families represented as labelled transition systems (LTSs). The key idea is to exploit the algebraic properties of the correctness relation: they allow the correctness of large system instances to be derived from that of smaller ones and, in the best case, an infinite family of finite-state verification tasks to be reduced to a finite one, which can then be solved using existing tools. The main contribution of the thesis is an algorithm that automates this reduction method. A specification and a system are given as parameterised LTSs, and the allowed parameter values are encoded using first-order logic. Parameters are sets and relations over these sets, which are typically used to denote, respectively, the identities of replicated components and the relationships between them. Because the number of parameters is not limited and they can be nested, one can express multiply parameterised systems with a parameterised substructure, an essential property from the viewpoint of modelling software systems. The algorithm terminates on all inputs, so its application domain is explicit in this sense. No other proposed parameterised verification method has both of these features. Moreover, some earlier results on the verification of parameterised systems are obtained as special cases of the results presented here. Finally, several natural and significant extensions to the formalism are considered, and it is shown that the problem becomes undecidable in each case. Therefore, the algorithm cannot be significantly extended in any direction without simultaneously restricting some other aspect.
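The reduction idea — deriving the correctness of all instances from finitely many small ones — can be illustrated with a deliberately simple, hypothetical sketch (ours, not the thesis's algorithm, which operates on parameterised LTSs with first-order-logic parameter constraints): an explicit-state check of mutual exclusion for token rings of a few sizes, with the cut-off for larger rings assumed rather than derived.

```python
def verify_mutex(n):
    """Explicit-state check of mutual exclusion for a token ring of
    size n. A state is (token, status), where status[i] is True iff
    process i is in its critical section; only the token holder may
    enter, and the token moves only while its holder is idle."""
    init = (0, (False,) * n)
    seen, stack = {init}, [init]
    while stack:
        token, status = stack.pop()
        if sum(status) > 1:          # two processes critical: violation
            return False
        if not status[token]:
            entered = list(status); entered[token] = True
            succs = [(token, tuple(entered)),        # enter critical section
                     ((token + 1) % n, status)]      # or pass the token on
        else:
            left = list(status); left[token] = False
            succs = [(token, tuple(left))]           # leave critical section
        for nxt in succs:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return True

# In the spirit of the thesis: solve finitely many finite-state tasks and
# appeal to a cut-off argument for all larger n (assumed here, not proved).
print(all(verify_mutex(n) for n in range(2, 6)))   # True
```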
4

Graph colourings and games

Meeks, Kitty M. F. T. January 2012
Graph colourings and combinatorial games are two very widely studied topics in discrete mathematics. This thesis addresses the computational complexity of a range of problems falling within one or both of these subjects. Much of the thesis is concerned with the computational complexity of problems related to the combinatorial game (Free-)Flood-It, in which players aim to make a coloured graph monochromatic ("flood" the graph) with the minimum possible number of flooding operations; such problems are known to be computationally hard in many cases. We begin by proving some general structural results about the behaviour of the game, including a powerful characterisation of the number of moves required to flood a graph in terms of the number of moves required to flood its spanning trees; these structural results are then applied to prove tractability results about a number of flood-filling problems. We also consider the computational complexity of flood-filling problems when the game is played on a rectangular grid of fixed height (focussing in particular on 3×n and 2×n grids), answering an open question of Clifford, Jalsenius, Montanaro and Sach. The final chapter concerns the parameterised complexity of list problems on graphs of bounded treewidth. We prove structural results determining the list edge chromatic number and list total chromatic number of graphs with bounded treewidth and large maximum degree, which are special cases of the List (Edge) Colouring Conjecture and Total Colouring Conjecture respectively. Using these results, we show that the problem of determining either of these quantities is fixed-parameter tractable, parameterised by the treewidth of the input graph. Finally, we analyse a list version of the Hamilton Path problem, and prove it to be W[1]-hard when parameterised by the pathwidth of the input graph. These results answer two open questions of Fellows, Fomin, Lokshtanov, Rosamond, Saurabh, Szeider and Thomassen.
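The flooding operation at the heart of the game is easy to state in code. A minimal sketch (our own graph representation and names, not code from the thesis):

```python
def flood_move(colour_of, adj, pivot, new_colour):
    """One Flood-It move: recolour the monochromatic component
    containing `pivot` with `new_colour`; it thereby merges with any
    neighbouring vertices already of that colour on the next move."""
    old = colour_of[pivot]
    comp, frontier = {pivot}, [pivot]
    while frontier:                       # BFS over the pivot's component
        v = frontier.pop()
        for u in adj[v]:
            if u not in comp and colour_of[u] == old:
                comp.add(u)
                frontier.append(u)
    for v in comp:
        colour_of[v] = new_colour

def is_flooded(colour_of):
    return len(set(colour_of.values())) == 1

# Path a-b-c coloured 1,2,1: two moves flood the graph from pivot 'a'.
colours = {'a': 1, 'b': 2, 'c': 1}
adj = {'a': ['b'], 'b': ['a', 'c'], 'c': ['b']}
flood_move(colours, adj, 'a', 2)
flood_move(colours, adj, 'a', 1)
print(is_flooded(colours))   # True
```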
5

Simulating South African Climate with a Super-Parameterized Community Atmosphere Model (SP-CAM)

Dlamini, Nohlahla January 2019
MENVSC / Department of Geography and Geo-Information Sciences / The process of cloud formation and distribution in the atmospheric circulation system is very important, yet not easy to comprehend and forecast. Clouds affect the climate system by controlling the amount of solar radiation, precipitation and other climatic variables. Conventional parameterised General Circulation Models (GCMs) are unable to represent clouds and aerosol particles, and their influence on the climate, explicitly, and these parameterisations are thought to be responsible for most of the uncertainty in climate predictions. Therefore, the aim of this study is to investigate the climate of South Africa as simulated by the Super-Parameterised Community Atmosphere Model (SPCAM) for the period 1987-2016. The Community Atmosphere Model (CAM) and SPCAM datasets used in the study were obtained from Colorado State University (CSU), whilst dynamic and thermodynamic fields were obtained from the NCEP Reanalysis II. The simulations were compared against rainfall and temperature observations obtained from the South African Weather Service (SAWS) database. The accuracy of the model output from CAM and SPCAM in simulating rainfall and temperature at seasonal timescales was tested using the Root Mean Square Error (RMSE). It was found that CAM overestimates rainfall over the interior of the subcontinent during the December-February (DJF) season, whilst SPCAM showed high performance in depicting summer rainfall, particularly in the central and eastern parts of South Africa. During June-August (JJA), both configurations (CAM and SPCAM) had a dry bias in simulating winter rainfall over the south-western Cape region, even in cases where the observations showed little rainfall. CAM was also found to underestimate temperatures during DJF, with the SPCAM results closer to the reanalysis. The study further analysed the inter-annual variability of rainfall and temperature for different homogeneous regions across the whole of South Africa using both configurations. It was found that SPCAM had higher skill than CAM in simulating the inter-annual variability of rainfall and temperature over the summer rainfall regions of South Africa for the period 1987 to 2016. SPCAM also showed reasonable skill in simulating circulation fields (mean sea level pressure, geopotential height, omega, etc.) in contrast to the standard CAM for all seasons at the low and middle levels (850 hPa and 500 hPa). The study also focused on major El Niño Southern Oscillation (ENSO) events and found that SPCAM generally compared better with the observations. Although both versions of the model still feature substantial biases in simulating South African climate variables (rainfall, temperature, etc.), the magnitudes of the biases are generally smaller in the super-parameterised CAM than in the default CAM, suggesting that the implementation of super-parameterisation in CAM improves model performance and therefore seasonal climate prediction. / NRF
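The evaluation metric named in the abstract is standard. A minimal sketch of the RMSE comparison (the station values below are hypothetical, not data from the study):

```python
import math

def rmse(simulated, observed):
    """Root Mean Square Error between paired model and observed values
    (e.g., seasonal rainfall totals at SAWS stations)."""
    assert len(simulated) == len(observed)
    return math.sqrt(sum((s - o) ** 2 for s, o in zip(simulated, observed))
                     / len(simulated))

# Hypothetical DJF rainfall (mm) at three stations: model vs observed
print(rmse([610.0, 420.0, 380.0], [550.0, 400.0, 410.0]))
```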
6

Flood Suspended Sediment Transport: Combined Modelling from Dilute to Hyper-concentrated Flow

Pu, Jaan H., Wallwork, Joseph T., Khan, M.A., Pandey, M., Pourshahbaz, H., Satyanaga, A., Hanmaiahgari, P.R., Gough, Timothy D. 15 February 2021
During flooding, suspended sediment transport usually spans a wide range of conditions, from dilute to hyper-concentrated flow, depending on the local flow and ground conditions. This paper assesses the distribution of sediment for a variety of hyper-concentrated and dilute flows. Because of the differences between hyper-concentrated and dilute flows, a linear-power coupled model is proposed to integrate both regimes. A parameterised method combining the sediment size, Rouse number, mean concentration, and flow depth has been used to model the sediment profile. The accuracy of the proposed model has been verified against reported laboratory measurements and compared with other published analytical methods. The proposed method has been shown to compute the concentration profile effectively for a wide range of suspended sediment conditions, from hyper-concentrated to dilute flows. Detailed comparisons reveal that the proposed model calculates the dilute profile in good correspondence with the measured data and with other modelling results from the literature. For the hyper-concentrated profile, a clear division between lower-layer (bed-load) and upper-layer (suspended-load) transport can be observed in the measured data. Using the proposed model, the transition point from the lower to the upper layer can be calculated precisely.
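The abstract does not give the authors' coupled model itself; for orientation, a sketch of the classical Rouse profile that such Rouse-number-parameterised methods build on (the standard formula, not the paper's model):

```python
def rouse_profile(y, h, a, c_a, Z):
    """Classical Rouse suspended-sediment concentration at height y
    above the bed:
        C(y) = C_a * [ ((h - y) / y) * (a / (h - a)) ]**Z
    h: flow depth, a: reference height, c_a: reference concentration,
    Z: Rouse number (settling velocity / (kappa * shear velocity))."""
    return c_a * (((h - y) / y) * (a / (h - a))) ** Z

# Concentration at mid-depth for an illustrative dilute case
print(rouse_profile(y=0.5, h=1.0, a=0.05, c_a=0.01, Z=1.2))
```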
7

Functional description of sequence constraints and synthesis of combinatorial objects / Description fonctionnelle de contraintes sur des séquences et synthèse d’objets combinatoires

Arafailova, Ekaterina 25 September 2018
Contrary to the standard approach of introducing ad hoc constraints and designing dedicated algorithms to handle their combinatorial aspect, this thesis takes a different point of view. On the one hand, it focusses on describing a family of sequence constraints in a compositional way, as multiple layers of functions. On the other hand, it addresses the combinatorial aspect of both a single constraint and a conjunction of such constraints by synthesising compositional combinatorial objects, namely bounds, linear inequalities, non-linear constraints and finite automata. These objects are obtained in a systematic way and are not instance-specific: they are parameterised by one or several constraints, by the number of variables in a considered sequence of variables, and by the initial domains of the variables. When synthesising such objects we draw full benefit both from the declarative view of such constraints, based on regular expressions, and from the operational view, based on finite transducers and register automata. There are many advantages to synthesising combinatorial objects rather than designing dedicated algorithms: 1) parameterised formulae can be applied in the context of several resolution techniques, such as constraint programming or linear programming, whereas algorithms are typically tailored to a specific technique; 2) combinatorial objects can be combined together to provide better performance in practice; 3) finally, the quantities computed by some formulae can be used not only in an optimisation setting but also in the context of data mining.
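For a flavour of the operational view, here is a deliberately small, illustrative sketch (ours, and simplified: the thesis synthesises such automata systematically from regular expressions rather than writing them by hand) of a one-register automaton that scans the signature of a sequence to count peak occurrences:

```python
def nb_peaks(series):
    """Count peak occurrences in an integer sequence: the signature of
    consecutive pairs ('<', '=', '>') is scanned by a two-state
    automaton with one counter register. The peak pattern used here
    (a strict ascent, optional plateau, then a strict descent) is an
    illustrative simplification."""
    sig = ['<' if a < b else '>' if a > b else '='
           for a, b in zip(series, series[1:])]
    count, state = 0, 'start'
    for s in sig:
        if s == '<':
            state = 'rising'
        elif s == '>' and state == 'rising':
            count += 1            # an ascent followed by a descent: one peak
            state = 'start'
        # '=' leaves the state unchanged (plateaus do not break a peak)
    return count

print(nb_peaks([1, 3, 3, 2, 4, 5, 2]))   # 2 peaks
```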
