On the number of distinct squares in strings. Jiang, Mei, 04 1900
<p>We investigate the problem of the maximum number of distinct primitively rooted squares in a string. In contrast to the study of general strings, the number of distinct symbols in the string is introduced as an additional parameter of the problem. Let S(d,n) = max {s(x) | x is a (d,n)-string}, where s(x) denotes the number of distinct primitively rooted squares in a string x and a (d,n)-string denotes a string of length n with exactly d distinct symbols.</p> <p>Inspired by the d-step approach which was instrumental in Santos' tackling of the Hirsch conjecture, we introduce a (d,n-d) table with entries S(d,n), where d is the index for the rows and n-d is the index for the columns. We examine the properties of the S(d,n) function in the context of the (d,n-d) table and conjecture that the value of S(d,n) is no more than n-d. We present several properties equivalent to the conjecture. We discuss the significance of the main diagonal of the (d,n-d) table, i.e. the square-maximal (d,2d)-strings, for their relevance to the conjectured bound for all strings. We explore their structural properties under both assumptions, that they comply or do not comply with the conjecture, with the intention of deriving a contradiction. The result yields novel properties and statements equivalent to the conjecture, with computational applications to the determination of the values S(d,n).</p> <p>To further populate the (d,n-d) table, we design and implement an efficient computational framework for computing S(d,n). Instead of generating all possible (d,n)-strings, as a brute-force approach would, the computational effort is significantly reduced by narrowing down the search space for square-maximal strings. With an easily accessible lower bound, obtained either inductively from previously computed values or by an effective heuristic search, only a relatively small set of candidate strings that might exceed the lower bound is generated. To this end, the notions of s-cover and the density of a string are introduced and utilized. In special circumstances, the computational efficiency can be further improved by starting the s-cover with a double square structure. In addition, we present an auxiliary algorithm that returns the required information, including the number of distinct squares, for each generated candidate string. This algorithm is a modified version of the FJW algorithm, an implementation based on Crochemore's partition algorithm developed by Franek, Jiang and Weng. As of the writing of this thesis, we have been able to obtain the maximum number of distinct squares in binary strings up to length 70.</p> / Doctor of Philosophy (PhD)
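As a concrete illustration of the quantities involved, s(x) can be computed by brute force directly from the definitions above: enumerate even-length substrings, keep the squares whose root is primitive, and count the distinct ones. This sketch is ours for illustration only and is not the thesis's far more efficient FJW-based algorithm:

```python
def is_primitive(w):
    # A string w is primitive if it is not a proper power u^k with k >= 2.
    n = len(w)
    for p in range(1, n):
        if n % p == 0 and w == w[:p] * (n // p):
            return False
    return True

def distinct_squares(x):
    # s(x): the number of distinct primitively rooted squares uu occurring in x.
    squares = set()
    n = len(x)
    for i in range(n):
        for length in range(2, n - i + 1, 2):  # only even lengths can be squares
            w = x[i:i + length]
            half = length // 2
            if w[:half] == w[half:] and is_primitive(w[:half]):
                squares.add(w)
    return len(squares)
```

For example, s("aaaa") = 1, since "aa" is the only primitively rooted square ("aaaa" itself has the non-primitive root "aa").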
Parameterized Verification and Synthesis for Distributed Agreement-Based Systems. Nouraldin Jaber (13796296), 19 September 2022
<p>Distributed agreement-based systems use common distributed agreement protocols such as leader election and consensus as building blocks for their target functionality—processes in these systems may need to agree on a leader, on the members of a group, on owners of locks, or on updates to replicated data. Such distributed agreement-based systems are common and potentially permit modular, scalable verification approaches that mimic their modular design. Interestingly, while there are many verification efforts that target agreement protocols themselves, little attention has been given to distributed agreement-based systems that build on top of these protocols. </p>
<p>In this work, we aim to develop a fully automated, modular, and usable parameterized verification approach for distributed agreement-based systems. To do so, we need to overcome the following challenges. First, the fully automated parameterized verification problem, i.e., the problem of algorithmically checking if the system is correct for any number of processes, is a well-known <em>undecidable </em>problem. Second, to enable modular verification that leverages the inherently modular nature of these agreement-based systems, we need to be able to support <em>abstractions </em>of agreement protocols. Such abstractions can replace the agreement protocols’ implementations when verifying the overall system, enabling modular reasoning. Finally, even when the verification is fully automated, a system designer still needs assistance in <em>modeling </em>their distributed agreement-based systems. </p>
<p>We systematically tackle these challenges through the following contributions. </p>
<p>First, we support efficient, decidable verification of distributed agreement-based systems by developing a computational model—the GSP model—for reasoning about distributed (agreement-based) systems that admits decidability and <em>cutoff </em>results. Cutoff results enable practical verification by reducing the parameterized verification problem to the verification problem of a system with a fixed, finite number of processes. The GSP model supports generalized communication primitives and global guards, both of which are essential to enable abstractions of agreement protocols. </p>
<p>Then, we address the usability and modularity aspects by developing a framework, QuickSilver, tailored for modeling and modular parameterized verification of distributed agreement-based systems. QuickSilver provides an intuitive domain-specific language, called Mercury, that is equipped with two agreement primitives capable of abstracting away agreement protocols when modeling agreement-based systems, enabling modular verification. QuickSilver extends the decidability and cutoff results of the GSP model to provide fully automated, efficient parameterized verification for a large class of systems modeled in Mercury. </p>
<p>Finally, we leverage synthesis techniques to further enhance the usability of our approach and propose Cinnabar, a tool that supports synthesis of distributed agreement-based systems with efficiently-decidable parameterized verification. Cinnabar allows a system designer to provide a sketch of their Mercury model and uses a counterexample-guided synthesis procedure to search for model completions that both belong to the efficiently-decidable fragment of Mercury and are correct. </p>
<p>We evaluate our contributions on various interesting distributed agreement-based systems adapted from real-world applications, such as a data store, a lock service, a surveillance system, a pathfinding algorithm for mobile robots, and more. </p>
Auto-Parameterized Shape Grammar for Constructing Islamic Geometric Motif-Based Structures. Sayed, Zahra; Ugail, Hassan; Palmer, Ian J.; Purdy, J.; Reeve, Carlton. 29 June 2016
The complex formation of Islamic Geometric Patterns (IGP) is one of the distinctive features in Islamic art and architecture. Many have attempted to reproduce these patterns in digital form, using various pattern generation techniques, in 2D. Shape grammars are an effective pattern generation method, providing good aesthetic results. In this paper we describe a novel approach to generating 3D IGP using the shape grammar method. The particular emphasis here is on generating the motifs (repeated units within the pattern) in 3D using parameterization. These can then be manipulated within the 3D space to construct architectural structures. In this work we have developed two distinctive shape grammars in 3D, namely the Parameterized Shape Grammar (PSG) and the Auto-Parameterized Shape Grammar (APSG). Here the PSG generates the motifs and the APSG enables construction of the structures using the generated motifs. Both grammars are practically implemented as a 3D modelling tool within Autodesk Maya. The parameterization within each grammar is the key to generating both Islamic geometric motifs and Islamic geometric motif-based structures.
Finding Interesting Subgraphs with Guarantees. Cadena, Jose, 29 January 2018
Networks are a mathematical abstraction of the interactions between a set of entities, with extensive applications in social science, epidemiology, bioinformatics, and cybersecurity, among others. There are many fundamental problems when analyzing network data, such as anomaly detection, dense subgraph mining, motif finding, information diffusion, and epidemic spread. A common underlying task in all these problems is finding an "interesting subgraph"; that is, finding a part of the graph---usually small relative to the whole---that optimizes a score function and has some property of interest, such as connectivity or a minimum density.
Finding subgraphs that satisfy common constraints of interest, such as the ones above, is computationally hard in general, and state-of-the-art algorithms for many problems in network analysis are heuristic in nature. These methods are fast and usually easy to implement. However, they come with no theoretical guarantees on the quality of the solution, which makes it difficult to assess how the discovered subgraphs compare to an optimal solution, which in turn affects the data mining task at hand. For instance, in anomaly detection, solutions with low anomaly score lead to sub-optimal detection power. On the other end of the spectrum, there have been significant advances in approximation algorithms for these challenging graph problems in the theoretical computer science community. However, these algorithms tend to be slow and difficult to implement, and they do not scale to the large datasets that are common nowadays.
The goal of this dissertation is to develop scalable algorithms with theoretical guarantees for various network analysis problems where the underlying task is to find subgraphs with constraints. We find interesting subgraphs with guarantees by adapting techniques from parameterized complexity, convex optimization, and submodular optimization. These techniques are well known in the algorithm design literature, but straightforward applications of them lead to slow and impractical algorithms. One unifying theme in the problems that we study is that our methods are scalable without sacrificing the theoretical guarantees of these algorithm design techniques. We accomplish this combination of scalability and rigorous bounds by exploiting properties of the problems we are trying to optimize, by decomposing or compressing the input graph to a manageable size, and by parallelization.
We consider problems on network analysis for both static and dynamic network models, and we illustrate the power of our methods in applications such as public health, sensor data analysis, and event detection using social media data. / Ph. D. / Networks are a mathematical abstraction of the interactions between a set of entities, with extensive applications in social science, epidemiology, bioinformatics, and cybersecurity, among others. There are many fundamental problems when analyzing network data, such as anomaly detection, dense subgraph mining, motif finding, information diffusion, and epidemic spread. A common underlying task in all these problems is finding an “interesting subgraph”; that is, finding a part of the graph—usually small relative to the whole—that optimizes a score function and has some property of interest, such as being connected.
Finding subgraphs that satisfy common constraints of interest is computationally hard, and existing techniques for many problems of this kind are heuristic in nature. Heuristics are fast and usually easy to implement. However, they come with no theoretical guarantees on the quality of the solution, which makes it difficult to assess how the discovered subgraphs compare to an optimal solution, which in turn affects the data mining task at hand. For instance, in anomaly detection, solutions with low anomaly score lead to sub-optimal detection power. On the other end of the spectrum, there has been significant progress on these challenging graph problems in the theoretical computer science community. However, these techniques tend to be slow and difficult to implement, and they do not scale to the large datasets that are common nowadays.
The goal of this dissertation is to develop scalable algorithms with theoretical guarantees for various network analysis problems where the underlying task is to find subgraphs with constraints. One unifying theme in the problems that we study is that our methods are scalable without sacrificing theoretical guarantees. We accomplish this combination of scalability and rigorous bounds by exploiting properties of the problems we are trying to optimize, by decomposing or compressing the input graph to a manageable size, and by parallelization.
We consider problems on network analysis for both static and dynamic network models, and we illustrate the power of our methods in applications such as public health, sensor data analysis, and event detection using social media data.
Optimization problems with propagation in graphs: parameterized complexity and approximation. Chopin, Morgan, 05 July 2013
In this thesis, we investigate the computational complexity of optimization problems involving a diffusion process in a graph. More specifically, we are first interested in the target set selection problem. This problem consists of finding the smallest set of initially "activated" vertices of a graph such that all the other vertices become activated after a finite number of propagation steps. If we modify this process by allowing a vertex to be "protected" at each step, we end up with the firefighter problem, which asks to minimize the total number of activated vertices by protecting some particular vertices. We introduce and study a generalized version of this problem in which more than one vertex can be protected at each step.
We propose several complexity results for these problems, both from an approximation point of view and from a parameterized complexity perspective, according to standard parameters as well as parameters related to the graph structure.
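The activation process described above can be made concrete under an assumed threshold model (the thesis treats the propagation process more abstractly): a vertex activates once enough of its neighbors are active, and target set selection asks for a smallest seed set that eventually activates everything. A brute-force sketch, with function names of our choosing:

```python
from itertools import combinations

def propagate(neighbors, thresholds, seeds):
    # One diffusion run: an inactive vertex becomes active once the number
    # of its active neighbors reaches its threshold (assumed threshold model).
    active = set(seeds)
    changed = True
    while changed:
        changed = False
        for v in neighbors:
            if v not in active:
                if sum(u in active for u in neighbors[v]) >= thresholds[v]:
                    active.add(v)
                    changed = True
    return active

def min_target_set(neighbors, thresholds):
    # Exhaustive search for a smallest seed set activating every vertex.
    vertices = list(neighbors)
    for k in range(len(vertices) + 1):
        for seeds in combinations(vertices, k):
            if propagate(neighbors, thresholds, seeds) == set(vertices):
                return set(seeds)
```

On a path a-b-c with all thresholds 1, seeding a single endpoint activates the whole graph; on a triangle with all thresholds 2, two seeds are necessary and sufficient. The exhaustive search is exponential, which is exactly the hardness the parameterized and approximation results in the thesis address.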
Algorithmic aspects of comparing biological entities. Sikora, Florian, 30 September 2011
To investigate the complex links between genotype and phenotype, one can study the relations between different biological entities (between proteins, between metabolites, and so on). Together they form what is called a biological network, represented algorithmically by a graph. In this thesis we are mainly interested in the problem of finding an occurrence of a motif (a multiset of colors) in a vertex-colored graph representing a biological network. Such motifs usually correspond to a set of elements realizing the same biological function, and which may have been preserved through evolution. We continue the algorithmic study of this problem and of its variants (which allow more biological flexibility), identifying the computationally hard instances and studying different ways to cope with this hardness (parameterized complexity, preprocessing, approximation, and so on). We also develop a plugin for the Cytoscape software to solve this problem efficiently, and test it on real data.
We are also interested in several problems from comparative genomics. The scientific approach is the same: starting from a formalization of a biological problem, determine its computationally hard instances and propose solutions to cope with this hardness (or prove that such solutions are unlikely under standard complexity assumptions).
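The motif occurrence problem studied above can be stated concretely: given a vertex-colored graph and a multiset of colors, decide whether some connected set of vertices carries exactly that multiset. A brute-force sketch, ours for illustration and not the plugin's algorithm:

```python
from itertools import combinations
from collections import Counter

def has_motif(edges, colors, motif):
    # Decide whether some connected vertex set carries exactly the motif
    # (a Counter of colors), by enumerating all size-k vertex subsets.
    vertices = set(colors)
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    k = sum(motif.values())
    for subset in combinations(vertices, k):
        if Counter(colors[v] for v in subset) != motif:
            continue
        # Connectivity check: depth-first search restricted to the subset.
        sub = set(subset)
        seen, stack = set(), [next(iter(sub))]
        while stack:
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            stack.extend(adj[v] & (sub - seen))
        if seen == sub:
            return True
    return False
```

Enumerating subsets is exponential in the motif size k; the parameterized-complexity results discussed in the thesis concern precisely how far this can be improved when k is the parameter.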
Invariant inference for model checking of parameterized systems. Mebsout, Alain, 29 September 2014
This thesis tackles the problem of automatically verifying complex parameterized systems. This approach is important because it can guarantee that some properties hold without knowing a priori the number of components in the system. We focus in particular on the safety of such systems, and we handle the parameterized aspect with symbolic methods. This work is set in the theoretical framework of model checking modulo theories and resulted in a new model checker: Cubicle.
One of the main contributions of this thesis is a novel technique for automatically inferring invariants. The process of invariant generation is integrated with the model checking algorithm and allows the verification in practice of systems that are out of reach for traditional symbolic approaches. One successful application of this algorithm is the safety analysis of industrial-size parameterized cache coherence protocols. Finally, to address the problem of trusting the answer given by the model checker, we present two techniques for certifying our tool Cubicle based on the Why3 framework. The first consists in producing certificates whose validity can be assessed independently, while the second is an approach by deductive verification of the core of Cubicle.
Identification of Stiffness Reductions Using Partial Natural Frequency Data. Sokheang Thea (6620237), 15 May 2019
In vibration-based damage detection in structures, changes in dynamic properties such as natural frequencies, modeshapes, and derivatives of modeshapes are often used to identify the damaged elements. If only a partial list of natural frequencies is known, optimization methods may be needed to identify the damage. In this research, the algorithm proposed by Podlevskyi & Yaroshko (2013) is used to determine the stiffness distribution in shear building models. The lateral load-resisting elements of each story are represented as a single equivalent spring, and masses are lumped at floor levels. The method calculates stiffness values directly, i.e., without optimization, from the known partial list of natural frequencies and the mass distribution. It is shown that if the number of stories with reduced stiffness is smaller than the number of known natural frequencies, the stories with reduced stiffness can be identified. Numerical studies on building models with two stories and four stories are used to illustrate the solution method. The effect of error or noise in the given natural frequencies on stiffness estimates and, conversely, the sensitivity of natural frequencies to changes in stiffness are studied using 7-, 15-, 30-, and 50-story numerical models. From these studies, it is learnt that as the number of stories increases, the natural frequencies become less sensitive to stiffness changes. Additionally, eight laboratory experiments were conducted on a five-story aluminum structural model with ten slender columns in each story. Damage was simulated by removing columns in one, two, or three stories. The method can locate and quantify the damage in the cases presented in the experimental studies. It is also applied to a 1/3-scale 18-story steel moment frame building tested on an earthquake simulator (Suita et al., 2015) to identify the reduction in stiffness due to fractures of beam flanges. Only the first two natural frequencies are used to determine the stiffness reductions, since the third mode of the tower is torsional and no reasonable planar spring-mass model can be developed to represent all of the translational modes. The method produced possible cases of softening when the damage was assumed to occur at a single story.
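The forward relation that such an identification method inverts, from story stiffnesses and lumped floor masses to natural frequencies in a planar spring-mass shear building model, reduces to a generalized eigenvalue computation. The sketch below is illustrative only; it is the forward problem, not the Podlevskyi & Yaroshko identification algorithm, and the function name is our own:

```python
import numpy as np

def shear_building_frequencies(masses, stiffnesses):
    # Lumped-mass shear building: stiffnesses[i] is the story stiffness
    # between floor i and the floor (or ground) below it. Natural
    # frequencies follow from the generalized eigenproblem K v = w^2 M v.
    n = len(masses)
    M = np.diag(masses)
    K = np.zeros((n, n))
    for i in range(n):
        K[i, i] += stiffnesses[i]
        if i + 1 < n:
            K[i, i] += stiffnesses[i + 1]
            K[i, i + 1] = K[i + 1, i] = -stiffnesses[i + 1]
    w2 = np.linalg.eigvals(np.linalg.solve(M, K))
    return np.sort(np.sqrt(np.real(w2)))  # circular frequencies, ascending
```

A single story with m = 1 and k = 4 gives w = 2 rad/s; reducing a story stiffness shifts the computed frequencies, which is the signal the identification method exploits.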
Verification of Parameterized and Timed Systems: Undecidability Results and Efficient Methods. Deneux, Johann, January 2006
<p>Software is finding its way into an increasing range of devices (phones, medical equipment, cars...). A challenge is to design <i>verification</i> methods to ensure correctness of software. </p><p>We focus on <i>model checking</i>, an approach in which an abstract model of the implementation and a specification of requirements are provided. The task is to answer automatically whether the system conforms with its specification. We concentrate on (i) timed systems, and (ii) parameterized systems.</p><p><i>Timed systems </i>can be modeled and verified using the classical model of timed automata. Correctness is translated to language inclusion between two timed automata representing the implementation and the specification. We consider variants of timed automata, and show that the problem is at best highly complex, at worst undecidable.</p><p>A <i>parameterized system</i> contains a variable number of components. The problem is to verify correctness regardless of the number of components. <i>Regular model checking</i> is a prominent method which uses finite-state automata. We present a semi-symbolic minimization algorithm combining the partition refinement algorithm by Paige and Tarjan with decision diagrams.</p><p>Finally, we consider systems which are both timed and parameterized: <i>Timed Petri Nets</i> (<i>TPNs</i>) and <i>Timed Networks</i> (<i>TNs</i>). We present a method for checking safety properties of TPNs based on forward reachability analysis with acceleration. We show that verifying safety properties of TNs is undecidable when each process has at least two clocks, and explore decidable variations of this problem.</p>
Technology Characterization Models and Their Use in Designing Complex Systems. Parker, Robert Reed, 2011 May 1900
When systems designers are making decisions about which components or technologies to select for a design, they often use experience or intuition to select one technology over another. Additionally, developers of new technologies rarely provide more information about their inventions than discrete data points attained in testing, usually in a laboratory. This makes it difficult for system designers to select newer technologies in favor of proven ones. They lack the knowledge about these new technologies to consider them equally with existing technologies. Prior research suggests that set-based design representations can be useful for facilitating collaboration among engineers in a design project, both within and across organizational boundaries. However, existing set-based methods are limited in terms of how the sets are constructed and in terms of the representational capability of the sets. The goal of this research is to introduce and demonstrate new, more general set-based design methods that are effective for characterizing and comparing competing technologies in a utility-based decision framework. To demonstrate the new methods and compare their relative strengths and weaknesses, different technologies for a power plant condenser are compared. The capabilities of different condenser technologies are characterized in terms of sets defined over the space of common condenser attributes (cross sectional area, heat exchange effectiveness, pressure drop, etc.). It is shown that systems designers can use the resulting sets to explore the space of possible condenser designs quickly and effectively. It is expected that this technique will be a useful tool for system designers to evaluate new technologies and compare them to existing ones, while also encouraging the use of new technologies by providing a more accurate representation of their capabilities. 
I compare four representational methods by measuring the solution accuracy (compared to a more comprehensive optimization procedure's solution), computation time, and scalability (how a model changes with different data sizes). My results demonstrate that a support vector domain description-based method provides the best combination of these traits for this example. When combined with recent research on reducing its computation time, this method becomes even more favorable.