61 |
Modelos de efeito Allee e epidemiológicos de tuberculose / Allee effect and epidemiological models for tuberculosis. Lindomar Soares dos Santos, 04 July 2013.
The one-species population growth dynamics is permeated by the relationship between the harms of intraspecific competition and the benefits of the presence of conspecifics. For many species, the benefits of conspecific cooperation may outweigh the harms of competition. The positive correlation between population size and fitness in very small populations is known as the demographic Allee effect. Although there are isolated mathematical models for the different types of Allee effects, there is no simple model that covers them and connects them to more general growth models (such as the Richards model). We propose to unify Allee effect models and the Richards growth model into a single model that allows a new perspective on the demographic Allee effect. An example of the increased descriptive power of this generalization is the emergence of more than one cooperation-competition transition when a particular case of the new model (Allee-Gompertz) is considered. Despite the importance of population growth, most basic models of infectious disease transmission assume a constant population size or adopt implausible simplifications.
In this thesis, we expose the shortcomings of an established dynamic compartmental model of tuberculosis and propose a new model with logistic population growth. When the two are compared, our model yields more pessimistic long-term forecasts for the eradication of the disease when tested with parameters that describe inefficient control policies. We perform these predictions adopting control strategies of developed and underdeveloped countries. Since such compartmental models disregard spatial aspects, we developed an agent-based computational model, based on the proposed compartmental model, with two underlying structures: random networks and real networks. The sudden emergence of drug-resistant tuberculosis as a consequence of ineffective treatment is also a result of implementing these models in two distinct scenarios. These results are compared with those of the compartmental model and with those of a model with a simpler underlying structure; as a new result, both models show that the disease can be eradicated less than a decade after the start of treatment, provided that efficient control strategies are adopted.
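To make the growth laws above concrete, here is a small numerical sketch; the strong Allee form and the Richards form used are the textbook ones, and the parameter values are arbitrary rather than fitted.

```python
# Minimal numerical sketch, with arbitrary parameters, of the classic strong
# Allee model dN/dt = r*N*(1 - N/K)*(N/A - 1) next to Richards growth
# dN/dt = r*N*(1 - (N/K)^q); it illustrates the threshold behaviour discussed
# above and is not the unified model proposed in the thesis.
def allee(n, r=0.5, K=100.0, A=10.0):
    return r * n * (1 - n / K) * (n / A - 1)   # growth is negative below A

def richards(n, r=0.5, K=100.0, q=2.0):
    return r * n * (1 - (n / K) ** q)          # no threshold: always grows

def integrate(f, n0, dt=0.01, steps=2000):
    """Plain Euler integration of dN/dt = f(N) from N(0) = n0."""
    n = n0
    for _ in range(steps):
        n += dt * f(n)
    return n

# Below the Allee threshold A the population collapses; above it, it
# approaches the carrying capacity K:
assert integrate(allee, 5.0) < 1.0
assert abs(integrate(allee, 20.0) - 100.0) < 5.0
assert abs(integrate(richards, 5.0) - 100.0) < 5.0
```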
|
62 |
Error control with binary cyclic codes. Grymel, Martin-Thomas, January 2013.
Error-control codes provide a mechanism to increase the reliability of digital data being processed, transmitted, or stored under noisy conditions. Cyclic codes constitute an important class of error-control codes, offering powerful error detection and correction capabilities. They can easily be generated and verified in hardware, which makes them particularly well suited to practical use as error-detecting codes.

A cyclic code is based on a generator polynomial, which determines its properties, including its specific error detection strength. The optimal choice of polynomial depends on many factors that may be influenced by the underlying application. It is therefore advantageous to employ programmable cyclic code hardware that allows a flexible choice of polynomial to be applied to different requirements. A novel method is presented in this thesis to realise programmable cyclic code circuits that are fast, energy-efficient and minimise implementation resources.

It can be shown that the correction of a single-bit error on the basis of a cyclic code is equivalent to solving an instance of the discrete logarithm problem. A new approach is proposed for computing discrete logarithms; this leads to a generic deterministic algorithm for analysed group orders that equal Mersenne numbers with an exponent that is a power of two. The algorithm exhibits a worst-case runtime in the order of the square root of the group order and constant space requirements.

This thesis also establishes new relationships for finite fields represented as the polynomial ring over the binary field modulo a primitive polynomial. With a subset of these properties, a novel approach is developed for solving the discrete logarithm in the multiplicative groups of these fields. This leads to a deterministic algorithm for small group orders with linear space and linearithmic time requirements in the degree of the defining polynomial, enabling efficient correction of single-bit errors based on the corresponding cyclic codes.
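The equivalence mentioned in the abstract can be made concrete with a small sketch. Assuming a (7,4) cyclic Hamming code with the primitive generator polynomial g(x) = x^3 + x + 1 (an illustrative choice, not one taken from the thesis), a single-bit error at position i produces the syndrome x^i mod g(x), so locating the error means solving a discrete logarithm to base x; the brute-force search below stands in for the thesis's fast algorithms.

```python
# Sketch: single-bit error correction with a binary cyclic code as a discrete
# logarithm. Polynomials over GF(2) are encoded as Python ints (bit i is the
# coefficient of x^i). Illustrative generator: the primitive polynomial
# g(x) = x^3 + x + 1, giving the (7,4) cyclic Hamming code.
G = 0b1011   # g(x) = x^3 + x + 1
R = 3        # deg g = number of check bits
N = 7        # code length = order of x modulo g(x) = 2^3 - 1

def poly_mod(a: int, g: int) -> int:
    """Remainder of a(x) divided by g(x) over GF(2)."""
    dg = g.bit_length() - 1
    while a.bit_length() - 1 >= dg:
        a ^= g << (a.bit_length() - 1 - dg)
    return a

def locate_single_bit_error(word: int):
    """An error at bit i yields syndrome x^i mod g(x); recovering i from the
    syndrome is a discrete logarithm to base x, solved here by brute force."""
    s = poly_mod(word, G)
    if s == 0:
        return None                       # no detectable error
    for i in range(N):
        if poly_mod(1 << i, G) == s:      # solve x^i = s (mod g)
            return i
    raise ValueError("not correctable as a single-bit error")

msg = 0b1010
codeword = (msg << R) ^ poly_mod(msg << R, G)   # systematic encoding
received = codeword ^ (1 << 4)                  # flip bit 4 in transit
pos = locate_single_bit_error(received)
assert pos == 4 and received ^ (1 << pos) == codeword
```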
|
63 |
Probabilistic studies in number theory and word combinatorics: instances of dynamical analysis / Études probabilistes en théorie des nombres et combinatoire des mots : exemples d'analyse dynamique. Rotondo, Pablo, 27 September 2018.
Dynamical analysis incorporates tools from dynamical systems, namely the transfer operator, into the framework of analytic combinatorics, permitting the analysis of numerous algorithms and objects naturally associated with an underlying dynamical system. This dissertation presents, in the unified framework of dynamical analysis, the probabilistic study of two seemingly distinct problems: the probabilistic analysis of the recurrence function of Sturmian words, and the probabilistic study of the Continued Logarithm algorithm.

Sturmian words are a fundamental family in word combinatorics. They are, in a precise sense, the simplest infinite words that are not eventually periodic. Sturmian words have been studied extensively over the years, notably by Morse and Hedlund (1940), who exhibited a fundamental number-theoretical characterization of them as discrete codings of lines with irrational slope, relating them naturally to dynamical systems, in particular the Euclidean dynamical system. These words had never been studied from a probabilistic perspective. Here, we quantify the recurrence properties of a "random" Sturmian word, which are dictated by the so-called recurrence function; we perform a complete asymptotic probabilistic study of this function, quantifying its mean and describing its distribution under two complementary probabilistic models: one is a natural choice from an algorithmic point of view (but is innovative from the point of view of dynamical analysis), while the other allows a natural quantification of the worst-case growth of the recurrence function. We discuss the relation between these two models and their respective techniques, explaining how the two seemingly different techniques could be linked through the use of the Mellin transform. We also discuss our ongoing work, which aims at a unified approach, on two special families of Sturmian words: those associated with a quadratic irrational slope, and those with a rational slope (which give rise to Christoffel words).

The Continued Logarithm algorithm, introduced by Gosper in Hakmem (1978) as a mutation of classical continued fractions, computes the greatest common divisor of two natural numbers by performing division-like steps involving only binary shifts and subtractions. Its worst-case performance was studied recently by Shallit (2016), who gave a precise upper bound on the number of steps and exhibited a family of inputs attaining this bound. In this dissertation we employ dynamical analysis to study the average running time of the algorithm, giving precise mathematical constants for the asymptotics, as well as other parameters of interest. The underlying dynamical system resembles the Euclidean one at first sight, and was first studied by Chan (around 2005) with ergodic methods. However, the presence of powers of 2 in the quotients gives the central parameters a dyadic flavour that cannot be grasped solely by studying this system. We therefore introduce a new dynamical system with an additional dyadic component and work with this two-component system, one component real and the other dyadic. With this new mixed system at hand, we obtain the complete average-case analysis of the algorithm.
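A minimal sketch of the algorithm just described, under the standard formulation: one step maps a pair (a, b) with a >= b >= 1 to (2^k * b, a - 2^k * b), where k is the largest integer with 2^k * b <= a, so each step uses only shifts and one subtraction. The quantities traced below (the exponents k and the step count) are the kind of parameters whose averages the thesis analyses; the input values are arbitrary.

```python
# Sketch of the continued logarithm algorithm: shifts and subtractions only.
import math

def continued_log_steps(a: int, b: int):
    assert a >= b >= 1
    exponents = []
    while b > 0:
        k = 0
        while (b << (k + 1)) <= a:        # largest k with 2^k * b <= a
            k += 1
        a, b = b << k, a - (b << k)       # shift then subtract
        exponents.append(k)
    return a, exponents

def odd_part(n: int) -> int:
    while n % 2 == 0:
        n //= 2
    return n

# Powers of 2 may creep into the result, but odd common divisors survive
# every step, so the final value equals gcd(a, b) up to a power of two:
final, ks = continued_log_steps(1001, 77)
assert odd_part(final) == odd_part(math.gcd(1001, 77))   # both equal 77
```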
|
64 |
Elliptic Curve Cryptography for Lightweight Applications. Hitchcock, Yvonne Roslyn, January 2003.
Elliptic curves were first proposed as a basis for public key cryptography in the mid-1980s. They provide public key cryptosystems based on the difficulty of the elliptic curve discrete logarithm problem (ECDLP), so called because of its similarity to the discrete logarithm problem (DLP) over the integers modulo a large prime. One benefit of elliptic curve cryptosystems (ECCs) is that they can use a much shorter key length than other public key cryptosystems to provide an equivalent level of security. For example, 160-bit ECCs are believed to provide about the same level of security as 1024-bit RSA. Also, the level of security provided by an ECC increases faster with key size than for integer-based discrete logarithm (DL) or RSA cryptosystems. ECCs can also provide a faster implementation than RSA or DL systems, and use less bandwidth and power. These issues can be crucial in lightweight applications such as smart cards. In the last few years, ECCs have been included or proposed for inclusion in internationally recognized standards. Thus elliptic curve cryptography is set to become an integral part of lightweight applications in the immediate future.

This thesis presents an analysis of several important issues for ECCs on lightweight devices. It begins with an introduction to elliptic curves and the algorithms required to implement an ECC. It then gives an analysis of the speed, code size and memory usage of various possible implementation options. Enough detail is presented to enable an implementer to choose the algorithms that give the greatest speed whilst conforming to the code size and RAM restrictions of a particular lightweight device. Recommendations are made for new functions to be included on coprocessors for lightweight devices to support ECC implementations.

Another issue of concern for implementers is the side-channel attacks that have recently been proposed. These attacks obtain information about the cryptosystem by measuring side-channel information such as power consumption and processing time, and this information is then used to break implementations that have not incorporated appropriate defences. A new method of defence to protect an implementation from the simple power analysis (SPA) method of attack is presented in this thesis. It requires 44% fewer additions and 11% more doublings than the commonly recommended defence of performing a point addition in every loop of the binary scalar multiplication algorithm. The algorithm contributes to the current range of possible SPA defences a method with good speed but low memory usage.

Another topic of paramount importance to ECCs for lightweight applications is whether the security of fixed curves is equivalent to that of random curves. Because lightweight devices are unable to generate secure random curves, fixed curves are used in such devices. These curves provide the additional advantage of requiring less bandwidth, code size and processing time. However, it is intuitively obvious that a large precomputation to aid in breaking the ECDLP can be made for a fixed curve, whereas this is unavailable for a random curve. It would therefore appear that fixed curves are less secure than random curves, but quantifying the loss of security is much more difficult.
The thesis examines fixed curve security taking this observation into account, and includes a definition of equivalent security and an analysis of a variation of Pollard's rho method in which computations from solutions of previous ECDLPs can be used to solve subsequent ECDLPs on the same curve. A lower bound on the expected time to solve such ECDLPs using this method is presented, as well as an approximation of the expected time remaining to solve an ECDLP when a given size of precomputation is available. It is concluded that adding a total of 11 bits to the size of a fixed curve provides a level of security equivalent to that of random curves.

The final part of the thesis deals with proofs of security of key exchange protocols in the Canetti-Krawczyk proof model. This model has been used since it offers the advantage of a modular proof with reusable components. Firstly, a password-based authentication mechanism and its security proof are discussed, followed by an analysis of the use of the authentication mechanism in key exchange protocols. The Canetti-Krawczyk model is then used to examine secure tripartite (three-party) key exchange protocols. Tripartite key exchange protocols are particularly suited to ECCs because of the availability of bilinear mappings on elliptic curves, which allow more efficient tripartite key exchange protocols.
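The baseline defence that the abstract compares against, a point addition in every loop of binary scalar multiplication (often called double-and-add-always), can be sketched as follows on a toy curve; the curve parameters are illustrative, and a Python conditional is of course not a genuinely constant-time selection.

```python
# Sketch of the baseline SPA countermeasure: one doubling and one addition
# per key bit, so the sequence of curve operations does not depend on the
# key. Toy short-Weierstrass curve y^2 = x^3 + 2x + 3 over GF(97).
p, a = 97, 2
O = None                     # point at infinity

def add(P, Q):
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if P == Q:
        if y1 == 0: return O
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    elif x1 == x2:
        return O             # Q = -P, so the sum is the point at infinity
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def scalar_mul_always(k, P, bits=8):
    """Double-and-add-always: the addition result is simply discarded
    when the key bit is 0."""
    R = O
    for i in reversed(range(bits)):
        R = add(R, R)                    # doubling in every iteration
        S = add(R, P)                    # addition in every iteration
        R = S if (k >> i) & 1 else R     # keep it only when the bit is 1
    return R

P = (0, 10)                  # on the curve: 10^2 = 100 = 3 (mod 97)
assert scalar_mul_always(5, P) == add(P, add(P, add(P, add(P, P))))
```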
|
65 |
Algorithmes pour la factorisation d'entiers et le calcul de logarithme discret / Algorithms for integer factorization and discrete logarithm computation. Bouvier, Cyril, 22 June 2015.
In this thesis, we study the problems of integer factorization and discrete logarithm computation in finite fields. First, we study the ECM factoring algorithm and present a method to analyze the elliptic curves used in this algorithm by studying the Galois properties of division polynomials. Then, we present in detail the NFS algorithm for integer factorization, and we study in particular the polynomial selection step, for which we propose improvements of existing algorithms. Next, we present two algorithms for computing discrete logarithms in finite fields: NFS-DL and FFS. We also give details of two discrete logarithm computations carried out during this thesis, one with NFS-DL and the other with FFS. Finally, we study a step common to the NFS algorithm for integer factorization and to the NFS-DL and FFS algorithms for discrete logarithm computation: the filtering step. We study this step thoroughly and present an improvement, whose impact we validate using data from several factorization and discrete logarithm computations.
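A rough sketch of ECM stage 1 may help fix ideas: a factor of n is revealed when a modular inversion fails during curve arithmetic. This toy version uses textbook Weierstrass formulas, a tiny smoothness bound and a crude multiplier schedule; production implementations (and the curves the thesis analyses) use Montgomery curves, prime-power multipliers and far larger bounds.

```python
# Rough stage-1 ECM sketch: try random curves mod n until an inversion fails.
import math, random

class FactorFound(Exception):
    pass

def _inv(d: int, n: int) -> int:
    g = math.gcd(d % n, n)
    if 1 < g < n:
        raise FactorFound(g)      # the "lucky failure" that ECM relies on
    return pow(d, -1, n)          # raises ValueError if g == n (bad curve)

def _add(P, Q, a, n):
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if P == Q:
        if y1 == 0: return None
        lam = (3 * x1 * x1 + a) * _inv(2 * y1, n) % n
    elif x1 == x2:
        return None
    else:
        lam = (y2 - y1) * _inv(x2 - x1, n) % n
    x3 = (lam * lam - x1 - x2) % n
    return (x3, (lam * (x1 - x3) - y1) % n)

def _mul(k, P, a, n):
    R = None
    while k:
        if k & 1: R = _add(R, P, a, n)
        P = _add(P, P, a, n)
        k >>= 1
    return R

def ecm_stage1(n: int, B1: int = 200, curves: int = 50):
    """Return a nontrivial factor of n, or None if every curve fails."""
    for _ in range(curves):
        x, y, a = (random.randrange(1, n) for _ in range(3))
        P = (x, y)                        # b = y^2 - x^3 - a*x is implied
        try:
            for m in range(2, B1 + 1):    # crude schedule: every m <= B1
                P = _mul(m, P, a, n)
                if P is None:
                    break
        except FactorFound as f:
            return f.args[0]
        except ValueError:                # gcd == n: discard this curve
            continue
    return None

print(ecm_stage1(1009 * 2003))            # usually prints 1009 or 2003
```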
|
66 |
Explicit Segmentation Of Speech For Indian Languages. Ranjani, H G, 03 1900.
Speech segmentation is the process of identifying the boundaries between words, syllables or phones in the recorded waveforms of spoken natural languages. The lowest level of speech segmentation is the breakup and classification of the sound signal into a string of phones. The difficulty of this problem is compounded by the phenomenon of co-articulation of speech sounds.
The classical solution to this problem is to manually label and segment spectrograms. In the first step of this two-step process, a trained person listens to a speech signal, recognizes the word and phone sequence, and roughly determines the position of each phonetic boundary. The second step involves examining several features of the speech signal to place a boundary mark at the point where these features best satisfy a certain set of conditions specific to that kind of phonetic boundary. Manual segmentation of speech into phones is a highly time-consuming and painstaking process. Segmented speech is required for a variety of applications, such as acoustic analysis or building speech synthesis databases for high-quality speech output systems, and the time required to carry out this process for even relatively small speech databases can rapidly accumulate to prohibitive levels. This calls for automating the segmentation process.
The state-of-the-art segmentation techniques use Hidden Markov Models (HMMs) for phone states. They give an average accuracy of over 95% within 20 ms of manually obtained boundaries. However, HMM-based methods require large training data for good performance. Another major disadvantage of such speech recognition based segmentation techniques is that they cannot handle very long utterances, which are necessary for prosody modeling in speech synthesis applications.
Development of Text to Speech (TTS) systems in Indian languages has been difficult to date owing to the non-availability of sizeable segmented speech databases of good quality. Further, no prosody models exist for most of the Indian languages. Therefore, long utterances (at the paragraph level, including monologues) have been recorded as part of this work for creating the databases.
This thesis aims at automating the segmentation of very long speech sentences recorded for the application of corpus-based TTS synthesis for multiple Indian languages. In this explicit segmentation problem, we need to force-align boundaries in an utterance from its known phonetic transcription.
The major disadvantage of forcing boundary alignments on the entire speech waveform of a long utterance is the accumulation of boundary errors. To overcome this, we force boundaries between two known phones (here, two successive stop consonants are chosen) at a time. The approach used here is silence detection as a marker for stop consonants. This method gives around 89% accuracy (for the Hindi database) and is language independent and training free. These stop consonants act as anchor points for the next stage.
Two methods for explicit segmentation have been proposed. Both methods rely on the accuracy of the above stop consonant detection stage.
Another common stage is the recently proposed implicit method, which uses a Bach scale filter bank to obtain the feature vectors. The Euclidean Distance of the Mean of the Logarithm (EDML) of these feature vectors shows peaks at points where the spectrum changes. The method performs with an accuracy of 87% within 20 ms of manually obtained boundaries and achieves deletion and insertion rates of 3.2% and 21.4% respectively, for 100 sentences of the Hindi database.
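A hedged sketch of the EDML cue may clarify the idea: the score at each frame is the Euclidean distance between the means of the log feature vectors on either side of that frame, and it peaks where the spectrum changes. A plain log magnitude spectrogram stands in for the Bach scale filter bank of the thesis, and the window and frame sizes are illustrative.

```python
# Sketch of the EDML boundary cue over a generic log-spectral front end.
import numpy as np

def log_features(signal: np.ndarray, frame: int = 256, step: int = 128):
    """Windowed log magnitude spectra, shape (n_frames, n_bins)."""
    win = np.hanning(frame)
    frames = [signal[i:i + frame] * win
              for i in range(0, len(signal) - frame, step)]
    return np.log(np.abs(np.fft.rfft(frames, axis=1)) + 1e-8)

def edml(features: np.ndarray, w: int = 10) -> np.ndarray:
    """Euclidean distance between the means of the log features on either
    side of each frame; local maxima are boundary candidates."""
    n = len(features)
    score = np.zeros(n)
    for t in range(w, n - w):
        left = features[t - w:t].mean(axis=0)
        right = features[t:t + w].mean(axis=0)
        score[t] = np.linalg.norm(right - left)
    return score

# Boundary candidates are then the local maxima of the score, e.g.:
# peaks = [t for t in range(1, len(score) - 1)
#          if score[t - 1] < score[t] >= score[t + 1]]
```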
The first method is a three-stage approach. The first stage is the stop consonant detection described above; the second uses Quatieri's sinusoidal model to classify sounds as voiced or unvoiced between two successive stop consonants; the final stage uses the EDML function of the Bach scale feature vectors to obtain further boundaries within the voiced and unvoiced regions. It gives a Frame Error Rate (FER) of 26.1% for the Hindi database.
The second method uses duration statistics of the phones of the language. It again uses the EDML function of the Bach scale filter bank to obtain peaks at phone transitions, and uses the duration statistics to assign to each peak a probability of being a boundary. With this method, the FER improves to 22.8% for the Hindi database.
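As a small illustration of the duration-statistics idea (with a hypothetical Gaussian duration model and made-up numbers, not the thesis's actual statistics): a candidate peak is more plausible as the next boundary when the gap since the previous boundary matches the expected duration of the current phone.

```python
# Hypothetical duration prior for ranking EDML peaks.
import math

def boundary_likelihood(t_peak: float, t_prev: float,
                        mean_dur: float, std_dur: float) -> float:
    """Gaussian likelihood of a boundary at t_peak, given the previous
    boundary time and the (mean, std) duration of the phone between them."""
    z = (t_peak - t_prev - mean_dur) / std_dur
    return math.exp(-0.5 * z * z) / (std_dur * math.sqrt(2 * math.pi))

# Rank candidate peaks for a phone with mean 80 ms and std 20 ms:
cands = [52.0, 76.0, 130.0]                       # EDML peak times in ms
best = max(cands, key=lambda t: boundary_likelihood(t, 0.0, 80.0, 20.0))
assert best == 76.0   # the gap closest to the expected duration wins
```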
Both methods are promising in that they give low frame error rates. Results show that the second method outperforms the first because it incorporates knowledge of durations.
For the proposed approaches to be useful, manual intervention is required at the output of each stage. However, this intervention is less tedious and reduces the time taken to segment each sentence by around 60% compared to fully manual segmentation. The approaches have been successfully tested on three different languages, 100 sentences each: Kannada, Tamil and English (we used the TIMIT database to validate the algorithms).
In conclusion, a practical solution to the segmentation problem is proposed. The algorithm being training free, language independent (the ES-SABSF method) and speaker independent makes it useful in developing TTS systems for multiple languages, reducing the segmentation overhead. This method is currently being used in the lab for segmenting long Kannada utterances, spoken by reading a set of 1115 phonetically rich sentences.
|
67 |
Elliptic curve cryptosystem over optimal extension fields for computationally constrained devices. Abu-Mahfouz, Adnan Mohammed, 08 June 2005.
Data security will play a central role in the design of future IT systems. The PC has been a major driver of the digital economy. Recently, there has been a shift towards IT applications realized as embedded systems, because they have proved to be good solutions for many applications, especially those which require data processing in real time. Examples include security for wireless phones, wireless computing, pay-TV, and copy protection schemes for audio/video consumer products and digital cinemas. Most of these embedded applications will be wireless, which makes the communication channel vulnerable.

The implementation of cryptographic systems presents several requirements and challenges. For example, the performance of algorithms is often crucial, and guaranteeing security is a formidable challenge. One needs encryption algorithms to run at the transmission rates of the communication links, at speeds that are achieved through custom hardware devices. Public-key cryptosystems such as RSA, DSA and DSS have traditionally been used to accomplish secure communication via insecure channels. Elliptic curves are the basis for a relatively new class of public-key schemes. It is predicted that elliptic curve cryptosystems (ECCs) will replace many existing schemes in the near future. The main reason for the attractiveness of ECC is the fact that significantly smaller parameters can be used in ECC than in other competitive systems, but with equivalent levels of security. The benefits of smaller key sizes include faster computations and reductions in processing power, storage space and bandwidth. This makes ECC ideal for constrained environments where resources such as power, processing time and memory are limited.

The implementation of ECC requires several choices, such as the type of the underlying finite field, algorithms for implementing the finite field arithmetic, the type of the elliptic curve, algorithms for implementing the elliptic curve group operation, and elliptic curve protocols. Many of these selections may have a major impact on overall performance. In this dissertation a finite field from a special class called the Optimal Extension Fields (OEFs) is chosen as the underlying finite field for implementing ECC. OEFs utilize the fast integer arithmetic available on modern microcontrollers to produce very efficient results without resorting to multiprecision operations or arithmetic using polynomials of large degree. This dissertation discusses the theoretical and implementation issues associated with the development of this finite field in a low-end embedded system. It also presents various improvement techniques for OEF arithmetic. The main objectives of this dissertation are to:
- implement the functions required to perform the finite field arithmetic operations;
- implement the functions required to generate an elliptic curve and to embed data on that elliptic curve;
- implement the functions required to perform the elliptic curve group operation.
All of these functions constitute a library that could be used to implement any elliptic curve cryptosystem. In this dissertation this library is implemented on an 8-bit AVR Atmel microcontroller. / Dissertation (MEng (Computer Engineering))--University of Pretoria, 2006. / Electrical, Electronic and Computer Engineering / unrestricted
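The defining trick of OEF arithmetic, a single-word pseudo-Mersenne prime p together with an irreducible binomial x^m - w so that reduction costs one multiplication per high-order coefficient, can be sketched as follows. The parameters are deliberately tiny: p = 2^8 - 17 = 239 fits one 8-bit word, m = 2, and w = 7 is a quadratic non-residue mod 239 so that x^2 - 7 is irreducible; practical OEFs use larger extension degrees.

```python
# Minimal sketch of arithmetic in the OEF GF(239^2) = GF(239)[x]/(x^2 - 7).
P, M, W = 239, 2, 7

def add(a, b):
    """Coefficient-wise addition; elements are lists of M ints mod P."""
    return [(x + y) % P for x, y in zip(a, b)]

def mul(a, b):
    """Schoolbook product followed by binomial reduction x^(M+k) = w * x^k."""
    t = [0] * (2 * M - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            t[i + j] = (t[i + j] + x * y) % P
    for k in range(M - 1):
        t[k] = (t[k] + W * t[M + k]) % P   # one multiply per high word
    return t[:M]

def pow_field(a, e):
    """Square-and-multiply exponentiation in GF(P^M)."""
    r = [1] + [0] * (M - 1)
    while e:
        if e & 1:
            r = mul(r, a)
        a = mul(a, a)
        e >>= 1
    return r

# Sanity check: every nonzero element has order dividing P^M - 1.
x = [5, 1]                                # the element 5 + x
assert pow_field(x, P ** M - 1) == [1, 0]
```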
|