141 |
Conception, développement et analyse de systèmes de fonction booléennes décrivant les algorithmes de chiffrement et de déchiffrement de l'Advanced Encryption Standard / Design, development and analysis of Boolean function systems describing the encryption and decryption algorithms of the Advanced Encryption Standard / Dubois, Michel 24 July 2017 (has links)
La cryptologie est une des disciplines des mathématiques, elle est composée de deux sous-ensembles: la cryptographie et la cryptanalyse. Tandis que la cryptographie s'intéresse aux algorithmes permettant de modifier une information afin de la rendre inintelligible sans la connaissance d'un secret, la seconde s'intéresse aux méthodes mathématiques permettant de recouvrer l'information originale à partir de la seule connaissance de l'élément chiffré. La cryptographie se subdivise elle-même en deux sous-ensembles: la cryptographie symétrique et la cryptographie asymétrique. La première utilise une clef identique pour les opérations de chiffrement et de déchiffrement, tandis que la deuxième utilise une clef pour le chiffrement et une autre clef, différente de la précédente, pour le déchiffrement. Enfin, la cryptographie symétrique travaille soit sur des blocs d'information soit sur des flux continus d'information. Ce sont les algorithmes de chiffrement par blocs qui nous intéressent ici. L'objectif de la cryptanalyse est de retrouver l'information initiale sans connaissance de la clef de chiffrement et ceci dans un temps plus court que l'attaque par force brute. Il existe de nombreuses méthodes de cryptanalyse comme la cryptanalyse fréquentielle, la cryptanalyse différentielle, la cryptanalyse intégrale, la cryptanalyse linéaire... Beaucoup de ces méthodes sont maintenues en échec par les algorithmes de chiffrement modernes. En effet, dans un jeu de la lance et du bouclier, les cryptographes développent des algorithmes de chiffrement de plus en plus efficaces pour protéger l'information chiffrée d'une attaque par cryptanalyse. C'est le cas notamment de l'Advanced Encryption Standard (AES). Cet algorithme de chiffrement par blocs a été conçu par Joan Daemen et Vincent Rijmen et transformé en standard par le National Institute of Standards and Technology (NIST) en 2001.
Afin de contrer les méthodes de cryptanalyse usuelles, les concepteurs de l'AES lui ont donné une forte structure algébrique. Ce choix élimine brillamment toute possibilité d'attaque statistique ; cependant, de récents travaux tendent à montrer que ce qui est censé faire la robustesse de l'AES pourrait se révéler être son point faible. En effet, selon ces études, cryptanalyser l'AES se ``résume'' à résoudre un système d'équations quadratiques symbolisant la structure du chiffrement de l'AES. Malheureusement, la taille du système d'équations obtenu et le manque d'algorithmes de résolution efficaces font qu'il est impossible, à l'heure actuelle, de résoudre de tels systèmes dans un temps raisonnable. L'enjeu de cette thèse est, à partir de la structure algébrique de l'AES, de décrire son algorithme de chiffrement et de déchiffrement sous la forme d'un nouveau système d'équations booléennes. Puis, en s'appuyant sur une représentation spécifique de ces équations, d'en réaliser une analyse combinatoire afin d'y détecter d'éventuels biais statistiques. / Cryptology is a branch of mathematics composed of two subfields: cryptography and cryptanalysis. While cryptography focuses on algorithms that modify information so as to make it unintelligible without knowledge of a secret, cryptanalysis focuses on mathematical methods for recovering the original information from knowledge of the encrypted element alone. Cryptography itself is subdivided into two subfields: symmetric cryptography and asymmetric cryptography. The first uses the same key for encryption and decryption, while the second uses one key for encryption and another, different key for decryption. Finally, symmetric cryptography operates either on blocks of information or on continuous streams of information.
Block cipher algorithms are the ones of interest here. The aim of cryptanalysis is to recover the original information without knowing the encryption key, and to do so in less time than a brute-force attack. There are many methods of cryptanalysis, such as frequency analysis, differential cryptanalysis, integral cryptanalysis, linear cryptanalysis... Many of these methods are defeated by modern encryption algorithms. Indeed, in a game of spear and shield, cryptographers develop ever more effective encryption algorithms to protect encrypted information from cryptanalytic attack. This is notably the case of the Advanced Encryption Standard (AES). This block cipher was designed by Joan Daemen and Vincent Rijmen and adopted as a standard by the National Institute of Standards and Technology (NIST) in 2001. To counter the usual methods of cryptanalysis, the designers of the AES gave it a strong algebraic structure. This choice brilliantly eliminates any possibility of statistical attack; however, recent work suggests that what is supposed to be the strength of the AES could prove to be its weak point. According to these studies, cryptanalyzing the AES ``comes down to'' solving a system of quadratic equations symbolizing the structure of the AES encryption. Unfortunately, the size of the system of equations obtained and the lack of efficient solving algorithms make it impossible, at present, to solve such systems in a reasonable time. The challenge of this thesis is, starting from the algebraic structure of the AES, to describe its encryption and decryption algorithms in the form of a new system of Boolean equations. Then, relying on a specific representation of these equations, to carry out a combinatorial analysis in order to detect potential statistical biases.
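The algebraic description mentioned above can be made concrete: each output bit of an S-box is a Boolean function, and its equation is its algebraic normal form (ANF), obtainable by a Möbius transform of the truth table. A minimal sketch on a hypothetical 3-bit S-box (not the AES S-box, whose 8-bit ANF would be far larger):

```python
# Sketch: derive Boolean equations (ANF) for each output bit of a small
# S-box via the Moebius transform. The 3-bit S-box below is hypothetical,
# chosen only for illustration -- it is NOT the AES S-box.
SBOX = [3, 6, 1, 4, 7, 0, 5, 2]
N = 3  # input bits

def anf(truth_table):
    """Moebius transform: truth table -> ANF coefficients over GF(2)."""
    c = list(truth_table)
    for i in range(N):
        for x in range(1 << N):
            if x & (1 << i):
                c[x] ^= c[x ^ (1 << i)]
    return c  # c[m] == 1 iff the monomial with variable mask m appears

def equations():
    eqs = []
    for bit in range(N):  # one Boolean equation per output bit
        tt = [(SBOX[x] >> bit) & 1 for x in range(1 << N)]
        coeffs = anf(tt)
        monomials = [
            "*".join(f"x{i}" for i in range(N) if m & (1 << i)) or "1"
            for m in range(1 << N) if coeffs[m]
        ]
        eqs.append(f"y{bit} = " + " + ".join(monomials))
    return eqs

for eq in equations():
    print(eq)
```

Cryptanalyzing the cipher then amounts to solving the resulting system for the unknown key bits, which is exactly the hard step the abstract describes.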
|
142 |
Canalização: fenótipos robustos como consequência de características da rede de regulação gênica / Canalization: phenotype robustness as consequence of characteristics of the gene regulatory network / Patricio, Vitor Hugo Louzada 20 April 2011 (has links)
Em sistemas biológicos, o estudo da estabilidade das redes de regulação gênica é visto como uma contribuição importante que a Matemática pode proporcionar a pesquisas sobre câncer e outras doenças genéticas. Neste trabalho, utilizamos o conceito de ``canalização'' como sinônimo de estabilidade em uma rede biológica. Como as características de uma rede de regulação canalizada ainda são superficialmente compreendidas, estudamos esse conceito sob o ponto de vista computacional: propomos um modelo matemático simplificado para descrever o fenômeno e realizamos algumas análises sobre o mesmo. Mais especificamente, a estabilidade da maior bacia de atração das redes Booleanas - um clássico paradigma para a modelagem de redes de regulação - é analisada. Os resultados indicam que a estabilidade da maior bacia de atração está relacionada com dados biológicos sobre o crescimento de colônias de leveduras e que considerações sobre a interação entre as funções Booleanas e a topologia da rede devem ser realizadas conjuntamente na análise de redes estáveis. / In biological systems, the study of the stability of gene regulatory networks is seen as an important contribution that Mathematics can make to research on cancer and other genetic diseases. In this work, we use the concept of ``canalization'' as a synonym for stability in a biological network. Since the characteristics of canalized regulatory networks are still only superficially understood, we study this concept from a computational point of view: a simplified mathematical model is proposed to describe the phenomenon, and several analyses are performed on it, using Boolean networks - a classical paradigm for modeling regulatory networks. Specifically, the stability of the largest basin of attraction of these networks is analyzed.
Our results indicate that the stability of the largest basin of attraction is related to biological data on the growth of yeast colonies, and that the interaction between Boolean functions and network topology must be considered jointly in the analysis of stable networks.
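The central object of this analysis, the largest basin of attraction of a Boolean network, can be sketched by exhaustive enumeration on a toy example. The three-node network and its update rules below are hypothetical, not taken from the thesis:

```python
from itertools import product

# Sketch: attractors and basin sizes of a small synchronous Boolean
# network, found by exhaustive enumeration. The 3-node network and its
# update rules are hypothetical, chosen only to illustrate the
# "largest basin of attraction" measure.
def step(state):
    a, b, c = state
    return (b, a, a and b)   # A <- B, B <- A, C <- A AND B

def attractor_of(state):
    """Iterate until a state repeats; return the cycle as a frozenset."""
    seen = {}
    while state not in seen:
        seen[state] = len(seen)
        state = step(state)
    cycle_start = seen[state]
    return frozenset(s for s, i in seen.items() if i >= cycle_start)

basins = {}
for state in product([False, True], repeat=3):
    basins.setdefault(attractor_of(state), []).append(state)

largest = max(basins.values(), key=len)
print(f"{len(basins)} attractors; largest basin holds {len(largest)}/8 states")
```

Real regulatory networks are far too large for this brute-force enumeration, which is why simplified models and sampling, as in the thesis, are needed.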
|
143 |
DNA-based logic / Bader, Antoine January 2018 (has links)
DNA nanotechnology has been developed in order to construct nanostructures and nanomachines by virtue of the programmable self-assembly properties of DNA molecules. Although DNA nanotechnology initially focused on spatial arrangement of DNA strands, new horizons have been explored owing to the development of the toehold-mediated strand-displacement reaction, conferring new dynamic properties to previously static and rigid structures. A large variety of DNA reconfigurable nanostructures, stepped and autonomous nanomachines and circuits have been operated using the strand-displacement reaction. Biological systems rely on information processing to guide their behaviour and functions. Molecular computation is a branch of DNA nanotechnology that aims to construct and operate programmable computing devices made out of DNA that could interact in a biological context. Similar to conventional computers, the computational processes involved are based on Boolean logic, a propositional language that describes statements as being true or false while connecting them with logic operators. Numerous logic gates and circuits have been built with DNA that demonstrate information processing at the molecular level. However, development of new systems is called for in order to perform new tasks of higher computational complexity and enhanced reliability. The contribution of secondary structure to the vulnerability of a toehold-sequestered device to undesired triggering of inputs was examined, giving new approaches for minimizing leakage of DNA devices. This device was then integrated as a logic component in a DNA-based computer with a retrievable memory, thus implementing two essential biological functions in one synthetic device. Additionally, G-quadruplex logic gates were developed that can be switched between two topological states in a logic fashion. Their individual responses were detected simultaneously, establishing a new approach for parallel biological computing. 
A new AND-NOT logic circuit based on the seesaw mechanism was constructed that, in combination with the already existing AND and OR gates, forms a complete basis set that can perform any Boolean computation. This work introduces a new mode of kinetic control over the operation of such DNA circuits. Finally, the first example of a transmembrane logic gate operated at the single-molecule level is described. This could serve as a potential platform for biosensing.
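The completeness claim for the AND/OR/AND-NOT gate set can be illustrated in a few lines: given a constant-1 signal, AND-NOT yields NOT, and {AND, OR, NOT} is a known complete basis, as the XOR construction below shows. This is a plain Boolean sketch, not a model of the DNA seesaw mechanism itself:

```python
# Sketch: why {AND, OR, AND-NOT} is functionally complete (given a
# constant-1 signal). AND-NOT(x, y) = x AND (NOT y), so AND-NOT(1, y)
# realizes NOT y, and {AND, OR, NOT} is a known complete basis.
AND = lambda x, y: x & y
OR = lambda x, y: x | y
ANDNOT = lambda x, y: x & (1 - y)   # x AND (NOT y)

NOT = lambda y: ANDNOT(1, y)        # NOT from AND-NOT and constant 1

def XOR(x, y):
    # XOR built only from the gates above
    return OR(ANDNOT(x, y), ANDNOT(y, x))

for x in (0, 1):
    for y in (0, 1):
        print(x, y, XOR(x, y))
```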
|
144 |
Konstrukce minimálních DNF reprezentací 2-intervalových funkcí / A construction of minimum DNF representations of 2-interval functions / Dubovský, Jakub January 2012 (has links)
Title: A construction of minimum DNF representations of 2-interval functions Author: Jakub Dubovský Department: Dept. of Theoretical Computer Science and Mathematical Logic Supervisor: doc. RNDr. Ondřej Čepek, Ph.D. Abstract: The thesis is devoted to interval Boolean functions, focusing on the construction of their representations by disjunctive normal forms with a minimum number of terms. A summary of known results in this field for 1-interval functions is presented, and it is shown that the method used to prove those results cannot, in general, be applied to functions with two or more intervals. The thesis attempts to extend these results to 2-interval functions: an optimization algorithm for a special subclass of them is constructed, and an exact error estimate for an approximation algorithm is proven. Command-line software for experimenting with interval functions is part of the thesis. Keywords: Boolean function, interval function, representation construction, approximation
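For readers unfamiliar with the objects involved: an n-variable interval function is true exactly when its input, read as an n-bit number, falls in a given interval. A minimal sketch of the truth table and the naive one-term-per-true-point DNF that minimization would then shorten (illustrative code, not the thesis's algorithm):

```python
# Sketch: an n-bit interval function f(x) = 1 iff a <= x <= b, plus the
# naive DNF with one full term per true point. Minimum DNF construction
# (the topic of the thesis) would merge these terms; this sketch only
# sets up the object being minimized. x0 is the most significant bit.
def interval_function(n, a, b):
    return [1 if a <= x <= b else 0 for x in range(1 << n)]

def naive_dnf(n, truth_table):
    terms = []
    for x, v in enumerate(truth_table):
        if v:
            lits = [f"x{i}" if (x >> (n - 1 - i)) & 1 else f"~x{i}"
                    for i in range(n)]
            terms.append(" & ".join(lits))
    return " | ".join(terms)

tt = interval_function(3, 2, 5)     # true on {2, 3, 4, 5}
print(sum(tt), "true points")
print(naive_dnf(3, tt))
```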
|
145 |
Synthesis of Irreversible Incompletely Specified Multi-Output Functions to Reversible EOSOPS Circuits with PSE Gates / Fiszer, Robert Adrian 19 December 2014 (has links)
As quantum computers edge closer to viability, it becomes necessary to create logic synthesis and minimization algorithms that take into account the particular aspects of quantum computers that differentiate them from classical computers. Since quantum computers can be functionally described as reversible computers with superposition and entanglement, both advances in reversible synthesis and increased utilization of superposition and entanglement in quantum algorithms will increase the power of quantum computing.
One necessary component of any practical quantum computer is the computation of irreversible functions. However, very little work has been done on algorithms that synthesize and minimize irreversible functions into a reversible form. In this thesis, we present and implement a pair of algorithms that extend the best published solution to these problems by taking advantage of Product-Sum EXOR (PSE) gates, the reversible generalization of inhibition gates, which we have introduced in previous work [1,2].
We show that these gates, combined with our novel synthesis algorithms, result in much lower quantum costs over a wide variety of functions as compared to our competitors, especially on incompletely specified functions. Furthermore, this solution has applications for multi-valued and multi-output functions.
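The general idea of synthesizing an irreversible function in reversible form can be illustrated with the standard Toffoli (CCNOT) gate, which embeds the irreversible AND into a bijection on three bits. This is a textbook example, not the PSE-gate construction of the thesis:

```python
# Sketch: making the irreversible AND function reversible by embedding
# it in a Toffoli (CCNOT) gate: (a, b, c) -> (a, b, c XOR (a AND b)).
# With the ancilla c = 0 the third output is a AND b, and the mapping
# on all three bits is a bijection, hence reversible.
def toffoli(a, b, c):
    return (a, b, c ^ (a & b))

# Bijectivity check: every 3-bit output occurs exactly once.
outputs = {toffoli(a, b, c)
           for a in (0, 1) for b in (0, 1) for c in (0, 1)}
print(len(outputs))  # 8 distinct outputs -> reversible

# AND recovered with ancilla c = 0:
for a in (0, 1):
    for b in (0, 1):
        print(a, b, toffoli(a, b, 0)[2])
```

Synthesis algorithms like those in the thesis aim to realize whole multi-output functions this way while minimizing gate count and ancilla (garbage) bits.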
|
146 |
Testability Design and Testability Analysis of a Cube Calculus Machine / Zhou, Lixin 05 May 1995 (has links)
Cube Calculus is an algebraic model popularly used to process and minimize Boolean functions. Cube Calculus operations are widely used in logic optimization, logic synthesis, computer image processing and recognition, machine learning, and other newly developing applications which require massive logic operations. Cube calculus operations can be implemented on conventional general-purpose computers by using an appropriate "model" and software which manipulates this model. The price we pay for this software-based approach is severe speed degradation, which has made the implementation of several high-level formal systems impractical. A cube calculus machine with a special data path designed to execute multiple-valued-input, multiple-valued-output cube calculus operations is presented in this thesis. This cube calculus machine can execute cube calculus operations 10-25 times faster than the software approach. To ensure the manufacturing testability of the cube calculus machine, emphasis has been placed on its testability design. Testability design and testability analysis of the iterative logic unit of the cube calculus machine were accomplished. Testability design and testability analysis methods for the cube calculus machine are discussed in detail in this thesis. A full-scan testability design method was used in the testability design and analysis. Using the single stuck-at fault model, a 98.30% test coverage of the cube calculus machine was achieved. A novel testability design and testability analysis approach is also presented in this thesis.
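The abstract does not define the cube calculus operations; a common encoding in the literature is positional cube notation, in which each variable occupies a bit field and intersection becomes a fieldwise AND. The sketch below shows that one operation under this assumed encoding, which is not necessarily the data path of the machine described:

```python
# Sketch: cube intersection in positional cube notation. Each variable
# is a 2-bit field: 01 = variable appears positive, 10 = negative,
# 11 = don't-care. Intersection is fieldwise AND; a 00 field means the
# cubes do not intersect. This encoding is a common convention, not
# necessarily the one used by the machine in the thesis.
POS, NEG, DC = 0b01, 0b10, 0b11

def intersect(cube_a, cube_b):
    result = [fa & fb for fa, fb in zip(cube_a, cube_b)]
    return None if 0 in result else result

# x1 & ~x2 (x3 don't-care)  intersected with  x1 (x2, x3 don't-care):
print(intersect([POS, NEG, DC], [POS, DC, DC]))   # -> [1, 2, 3], i.e. [POS, NEG, DC]
# x1 intersected with ~x1: empty
print(intersect([POS, DC], [NEG, DC]))            # -> None
```

The appeal of a hardware data path is visible even here: the whole operation is a wide bitwise AND plus a zero-field check, which a dedicated machine can do in one cycle.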
|
147 |
Mathematical Models of the Inflammatory Response in the Lungs / Minucci, Sarah B 01 January 2017 (has links)
Inflammation in the lungs can occur for many reasons, from bacterial infections to stretch by mechanical ventilation. In this work we compare and contrast various mathematical models for lung injuries in the categories of acute infection, latent versus active infection, and particulate inhalation. We focus on systems of ordinary differential equations (ODEs), agent-based models (ABMs), and Boolean networks. Each type of model provides different insight into the immune response to damage in the lungs. This knowledge includes a better understanding of the complex dynamics of immune cells, proteins, and cytokines, recommendations for treatment with antibiotics, and a foundation for more well-informed experiments and clinical trials. In each chapter, we provide an in-depth analysis of one model and summaries of several others. In this way we gain a better understanding of the important aspects of modeling the immune response to lung injury and identify possible points for future research.
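As a flavor of the ODE category surveyed above, here is a hypothetical two-variable pathogen/immune-cell model integrated with forward Euler. The equations and parameter values are invented for illustration and are not taken from any model analyzed in the thesis:

```python
# Sketch: a hypothetical two-variable ODE model of acute infection --
# pathogen P grows logistically and is cleared by immune cells M, which
# are recruited in response to P and decay otherwise. Equations and
# parameters are illustrative only.
def simulate(p0=0.1, m0=0.0, dt=0.01, steps=5000,
             growth=1.0, kill=2.0, recruit=1.5, decay=0.5):
    p, m = p0, m0
    for _ in range(steps):
        dp = growth * p * (1 - p) - kill * p * m
        dm = recruit * p - decay * m
        p, m = max(p + dt * dp, 0.0), max(m + dt * dm, 0.0)
    return p, m

p_final, m_final = simulate()
print(f"pathogen ~ {p_final:.3f}, immune cells ~ {m_final:.3f}")
```

With these parameters the system settles into a chronic coexistence state (nonzero pathogen and immune load); varying the clearance and recruitment rates is exactly the kind of question the ODE models in the thesis address with antibiotic treatment terms.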
|
148 |
Developing a GIS-Based Decision Support Tool For Evaluating Potential Wind Farm Sites / Xu, Xiao Mark January 2007 (has links)
In recent years, the popularity of wind energy has grown. It is starting to play a large role in generating renewable, clean energy around the world. In New Zealand, there is increasing recognition and awareness of global warming and the pollution caused by burning fossil fuels, as well as the increased difficulty of obtaining oil from foreign sources and the fluctuating price of non-renewable energy products. This makes wind energy a very attractive alternative for keeping New Zealand clean and green. There are many issues involved in wind farm development. These issues can be grouped into two categories: economic issues and environmental issues. Wind farm developers often use a site-selection process to minimise the impact of these issues. This thesis aims to develop GIS-based models that provide an effective decision support tool for evaluating, at a regional scale, potential wind farm locations. The thesis first identifies common issues involved in wind farm development. Then, by reviewing previous research on wind farm site selection, the methods and models used by the academic and corporate sectors to address these issues are listed. Criteria for an effective decision support tool are also discussed: such a tool needs to be flexible, easy to implement and easy to use. More specifically, it needs to give users the ability to identify areas that are suitable for wind farm development based on different criteria. Having established the structure and criteria for a wind farm analysis model, a GIS-based tool was implemented in AML code using a Boolean logic model approach. This method uses binary maps for the final analysis. A total of 3645 output maps were produced based on different combinations of criteria. These maps can be used to conduct sensitivity analysis.
This research concludes that an effective GIS analysis tool can be developed to provide effective decision support for evaluating wind farm sites.
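The Boolean logic model approach described above can be sketched directly: each criterion yields a binary map, and a cell is suitable only where every map is 1. The grids and criteria below are invented; the actual tool was implemented in AML over real regional data:

```python
# Sketch: Boolean-overlay site screening on raster maps. Each criterion
# produces a binary grid (1 = passes); a cell is a candidate site only
# where ALL criterion grids are 1. The 3x3 grids and the three criteria
# are invented for illustration.
wind_speed_ok = [[1, 1, 0],
                 [1, 1, 1],
                 [0, 1, 1]]
far_from_houses = [[1, 0, 1],
                   [1, 1, 0],
                   [1, 1, 1]]
outside_reserves = [[1, 1, 1],
                    [0, 1, 1],
                    [1, 1, 0]]

def overlay(*maps):
    rows, cols = len(maps[0]), len(maps[0][0])
    return [[int(all(m[r][c] for m in maps)) for c in range(cols)]
            for r in range(rows)]

suitable = overlay(wind_speed_ok, far_from_houses, outside_reserves)
for row in suitable:
    print(row)
```

Running the overlay for every combination of criterion maps is what produces the thesis's thousands of output maps for sensitivity analysis.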
|
149 |
Dynamics in Boolean Networks / Karlsson, Fredrik January 2005 (has links)
In this thesis several random Boolean networks are simulated. Both completely computer-generated networks and models of biological networks are simulated. Several different tools are used to gain knowledge about robustness: Derrida plots, noise analysis and the mean probability for canalizing rules. Some simulations on how entropy works as an indicator of whether a network is robust are also included. The noise analysis works by measuring the Hamming distance between the state of the network when noise is applied and when no noise is applied. For many of the simulated networks two types of rules are applied: nested canalizing rules and flat-distributed rules. The computer-generated networks are of two types: scale-free networks and ER networks. One of the conclusions of this report is that nested canalizing rules are often more robust than flat-distributed rules. Another conclusion is that, for flat-distributed rules, the mean probability for canalizing rules has a very dominant effect on whether the network is robust. Yet another conclusion is that, for flat-distributed rules, the probability distribution of indegrees has a strong effect on whether a network is robust, due to the connection between the indegree distribution and the mean probability for canalizing rules.
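The noise analysis described above can be sketched as: flip one node of the state, advance both copies one synchronous step, and measure the Hamming distance between them. The random network below uses a fixed indegree and flat-distributed rules as one simple stand-in for the networks simulated in the thesis:

```python
import random

# Sketch of the noise analysis: perturb one node, advance both
# trajectories one synchronous step, measure the Hamming distance.
# A random network with fixed indegree K and flat-distributed rules
# (uniform random truth tables) is used as a simple stand-in.
random.seed(1)
N, K = 20, 2
inputs = [random.sample(range(N), K) for _ in range(N)]
rules = [[random.randint(0, 1) for _ in range(1 << K)] for _ in range(N)]

def step(state):
    return tuple(
        rules[i][sum(state[j] << b for b, j in enumerate(inputs[i]))]
        for i in range(N))

def hamming(s, t):
    return sum(x != y for x, y in zip(s, t))

total = 0
trials = 200
for _ in range(trials):
    s = tuple(random.randint(0, 1) for _ in range(N))
    flip = random.randrange(N)
    t = tuple(b ^ (i == flip) for i, b in enumerate(s))
    total += hamming(step(s), step(t))
print(f"mean one-step spread: {total / trials:.2f}")
```

Repeating this for a range of initial Hamming distances, rather than a single flipped bit, gives the Derrida plot mentioned in the abstract.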
|
150 |
Deriving Genetic Networks from Gene Expression Data and Prior Knowledge / Lindlöf, Angelica January 2001 (has links)
In this work three different approaches for deriving genetic association networks were tested: Pearson correlation, an algorithm based on the Boolean network approach, and prior knowledge. Pearson correlation and the Boolean-network-based algorithm derive associations from gene expression data. In the third approach, prior knowledge from a known genetic network of a related organism was used to derive associations for the target organism, by matching homologs and mapping the known genetic network onto the target organism. The results indicate that the Pearson correlation approach gave the best results, but the prior knowledge approach seems to be the one most worth pursuing.
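The first approach can be sketched in a few lines: compute pairwise Pearson correlations between expression profiles and link genes whose absolute correlation exceeds a threshold. The expression matrix and the 0.9 threshold below are invented for illustration:

```python
from math import sqrt

# Sketch of the Pearson-correlation approach: genes are linked when the
# absolute correlation of their expression profiles exceeds a threshold.
# The expression matrix and the 0.9 threshold are invented; real use
# would tune the threshold against known associations.
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

expression = {               # gene -> profile over 5 conditions
    "geneA": [1.0, 2.0, 3.0, 4.0, 5.0],
    "geneB": [2.1, 3.9, 6.2, 8.0, 9.9],   # tracks geneA
    "geneC": [5.0, 4.1, 3.0, 1.9, 1.0],   # strongly anti-correlated
}

threshold = 0.9
genes = sorted(expression)
edges = [(g, h) for i, g in enumerate(genes) for h in genes[i + 1:]
         if abs(pearson(expression[g], expression[h])) > threshold]
print(edges)
```

Correlation captures only undirected association, not regulation direction, which is one motivation for the Boolean-network and prior-knowledge approaches also tested in the thesis.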
|