111

Méthodes d'intégration produit pour les équations de Fredholm de deuxième espèce : cas linéaire et non linéaire / Product integration methods for Fredholm integral equations of the second kind : linear case and nonlinear case

Kaboul, Hanane 20 June 2016 (has links)
La méthode d'intégration produit a été proposée pour résoudre des équations linéaires de Fredholm de deuxième espèce singulières dont la solution exacte est régulière, au moins continue. Dans ce travail on adapte cette méthode à des équations dont la solution est juste intégrable. On étudie également son extension au cas non linéaire posé dans l'espace des fonctions intégrables. Ensuite, on propose une autre manière de mettre en oeuvre la méthode d'intégration produit : on commence par linéariser l'équation par une méthode de type Newton, puis on discrétise les itérations de Newton par la méthode d'intégration produit. / The product integration method has been proposed for solving singular linear Fredholm equations of the second kind whose exact solution is smooth, at least continuous. In this work, we adapt this method to the case where the solution is only integrable. We also study the nonlinear case in the space of integrable functions. Then, we propose a new version of the method in the nonlinear framework: we first linearize the equation by a Newton-type method and then discretize the Newton iterations by the product integration method.
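For orientation, the family of methods discussed above can be sketched with a plain Nyström (quadrature) discretization of a *smooth* linear Fredholm equation of the second kind, u(s) − ∫₀¹ k(s,t) u(t) dt = f(s). This is only a structural sketch: the kernel, right-hand side, and node count below are invented so that the exact solution is u(s) = s, and the singular kernels that motivate the thesis's product integration method are not treated here.

```python
# Sketch: trapezoidal Nystrom discretization of a smooth linear Fredholm
# equation of the second kind (invented example, NOT the singular setting
# the product integration method is designed for).

def solve_linear(a, b):
    """Solve a x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    a = [row[:] for row in a]
    b = b[:]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            m = a[r][col] / a[col][col]
            for c in range(col, n):
                a[r][c] -= m * a[col][c]
            b[r] -= m * b[col]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(a[r][c] * x[c] for c in range(r + 1, n))
        x[r] = (b[r] - s) / a[r][r]
    return x

def nystrom_fredholm(kernel, f, n):
    """Solve u(s) - integral_0^1 k(s,t) u(t) dt = f(s) at n trapezoid nodes,
    i.e. the linear system (I - W K) u = f."""
    h = 1.0 / (n - 1)
    nodes = [i * h for i in range(n)]
    w = [h] * n
    w[0] = w[-1] = h / 2.0
    a = [[(1.0 if i == j else 0.0) - w[j] * kernel(nodes[i], nodes[j])
          for j in range(n)] for i in range(n)]
    return nodes, solve_linear(a, [f(s) for s in nodes])

# Separable kernel k(s,t) = s*t with f(s) = 2s/3 has exact solution u(s) = s.
nodes, u = nystrom_fredholm(lambda s, t: s * t, lambda s: 2.0 * s / 3.0, 41)
err = max(abs(ui - si) for ui, si in zip(u, nodes))
```

For this smooth kernel the trapezoidal rule gives O(h²) accuracy; the point of product integration is precisely to recover comparable accuracy when k(s,t) is singular and such a plain quadrature degrades.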
112

Redução no tamanho da amostra de pesquisas de entrevistas domiciliares para planejamento de transportes: uma verificação preliminar / Reduction in sample size of household interview research for transportation planning: a preliminary check

Marcelo Figueiredo Massulo Aguiar 11 August 2005 (has links)
O trabalho tem por principal objetivo verificar, preliminarmente, a possibilidade de reduzir a quantidade de indivíduos na amostra de Pesquisa de Entrevistas Domiciliares, sem prejudicar a qualidade e representatividade da mesma. Analisar a influência das características espaciais e de uso de solo da área urbana constitui o objetivo intermediário. Para ambos os objetivos, a principal ferramenta utilizada foi o minerador de dados denominado Árvore de Decisão e Classificação contido no software S-Plus 6.1, que encontra as relações entre as características socioeconômicas dos indivíduos, as características espaciais e de uso de solo da área urbana e os padrões de viagens encadeadas. Os padrões de viagens foram codificados em termos de sequência cronológica de: motivos, modos, durações de viagem e períodos do dia em que as viagens ocorrem. As análises foram baseadas nos dados da Pesquisa de Entrevistas Domiciliares realizada pela Agência de Cooperação Internacional do Japão e Governo do Estado do Pará em 2000 na Região Metropolitana de Belém. Para se atingir o objetivo intermediário o método consistiu em analisar, através da Árvore de Decisão e Classificação, a influência da variável categórica Macrozona, que representa as características espaciais e de uso de solo da área urbana, nos padrões de viagens encadeadas realizados pelos indivíduos. Para o objetivo principal, o método consistiu em escolher, aleatoriamente, sub-amostras contendo 25% de pessoas da amostra final e verificar, através do Processamento de Árvores de Decisão e Classificação e do teste estatístico Kolmogorov - Smirnov, se os modelos obtidos a partir das amostras reduzidas conseguem ilustrar bem a freqüência de ocorrência dos padrões de viagens das pessoas da amostra final. Concluiu-se que as características espaciais e de uso de solo influenciam os padrões de encadeamento de viagens, e portanto foram incluídas como variáveis preditoras também nos modelos obtidos a partir das sub-amostras. 
A conclusão principal foi a não rejeição da hipótese de que é possível reduzir o tamanho da amostra de pesquisas domiciliares para fins de estudo do encadeamento de viagens. Entretanto ainda são necessárias muitas outras verificações antes de aceitar esta conclusão. / The main aim of this work is to verify the possibility of reducing the sample size of home-interview surveys without being detrimental to their quality and representativeness. The secondary aim is to analyze the influence of the spatial and land-use characteristics of the urban area. For both aims, the main analysis tool was the Decision and Classification Tree data miner in the software S-Plus 6.1, which finds relations between trip-chaining patterns and the individuals' socioeconomic characteristics and the spatial and land-use characteristics of the urban area. The trip-chaining patterns were coded as chronological sequences of trip purpose, travel mode, travel time and the period of day in which each trip occurs. The analyses were based on the home-interview survey carried out in the Belém Metropolitan Area in 2000 by the Japan International Cooperation Agency and the Pará State Government. To achieve the secondary aim, the method consisted of analyzing, with the Decision and Classification Tree, the influence of the categorical variable "Macrozona", which represents the spatial and land-use characteristics of the urban area, on the trip-chaining patterns of the individuals. For the main aim, the method consisted of randomly choosing sub-samples containing 25% of the individuals in the final sample and verifying, using the Decision and Classification Tree and the Kolmogorov-Smirnov statistical test, whether the models obtained from the reduced samples describe well the frequency of occurrence of the trip-chaining patterns in the final sample.
The first conclusion is that the spatial and land-use characteristics of the urban area influence the trip-chaining patterns, and they were therefore also included as independent variables in the models obtained from the sub-samples. The main conclusion was the non-rejection of the hypothesis that it is possible to reduce the sample size of home-interview surveys used for trip-chaining research. Nevertheless, several other verifications are necessary before this conclusion can be accepted.
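The subsample comparison described in this record can be sketched with the two-sample Kolmogorov-Smirnov statistic — the largest vertical gap between two empirical CDFs — applied to integer-coded trip-chain patterns. The pattern codes, sample sizes, and critical-value formula below are a generic illustration, not the study's actual data.

```python
import bisect
import random

def ks_two_sample(a, b):
    """Two-sample Kolmogorov-Smirnov statistic:
    the largest vertical gap between the two empirical CDFs."""
    a, b = sorted(a), sorted(b)
    def ecdf(xs, t):
        return bisect.bisect_right(xs, t) / len(xs)
    return max(abs(ecdf(a, t) - ecdf(b, t)) for t in sorted(set(a) | set(b)))

rng = random.Random(0)
# Invented stand-in for integer-coded trip-chaining patterns.
full = [rng.choice(range(12)) for _ in range(4000)]
quarter = rng.sample(full, len(full) // 4)   # the 25% random subsample

d = ks_two_sample(full, quarter)
# Large-sample 5% critical value for the two-sample test:
n, m = len(full), len(quarter)
critical = 1.36 * ((n + m) / (n * m)) ** 0.5
```

If `d` stays below `critical`, the subsample's pattern-frequency distribution is statistically indistinguishable from the full sample's — the non-rejection logic used in the study.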
113

Information Geometry and the Wright-Fisher model of Mathematical Population Genetics

Tran, Tat Dat 31 July 2012 (has links) (PDF)
My thesis develops a systematic approach to stochastic models in population genetics, in particular Wright-Fisher models affected only by random genetic drift. I used various mathematical methods, such as probability, PDE, and geometry, to answer an important question: "How do genetic change factors (random genetic drift, selection, mutation, migration, random environment, etc.) affect the behavior of gene frequencies or genotype frequencies across generations?" In a Hardy-Weinberg model, the Mendelian population model of a very large number of individuals without genetic change factors, the answer is given by the Hardy-Weinberg principle: gene frequencies remain unchanged from generation to generation, and genotype frequencies from the second generation onward also remain unchanged. With directional genetic change factors (selection, mutation, migration), we obtain a deterministic dynamics of gene frequencies, which has been studied in considerable detail. With non-directional genetic change factors (random genetic drift, random environment), we obtain a stochastic dynamics of gene frequencies, which has attracted even more interest. Combinations of these factors have also been considered. We consider a monoecious diploid population of fixed size N with n + 1 possible alleles at a given locus A, and assume that the evolution of the population is affected only by random genetic drift. The question is how the distribution of the relative allele frequencies, and its stochastic quantities, behave in time. When N is large enough, we can approximate this discrete Markov chain by a continuous Markov process with the same characteristics. In 1931, Kolmogorov first established a precise relation between continuous Markov processes and diffusion equations. These equations, called the (backward/forward) Kolmogorov equations, were first applied to population genetics in 1945 by Wright.
Note that these equations are singular parabolic equations (the diffusion coefficients vanish on the boundary). To solve them, we use generalized hypergeometric functions. To understand what happens after the first exit time and, more generally, the behavior of the whole process, in joint work with J. Hofrichter we define the global solution by moment conditions and calculate the component solutions by a boundary flux method and a combinatorial method. One interesting property is that some statistical quantities of interest are solutions of a singular elliptic second-order linear equation with discontinuous (or incomplete) boundary values. Many papers and textbooks have used this property to find those quantities; however, the uniqueness of these problems had not been proved. Littler, in his 1975 PhD thesis, took up the uniqueness problem, but his proof, in my view, is not rigorous. In joint work with J. Hofrichter, we give two different rigorous proofs of uniqueness: the first by an approximation method, the second by a blow-up method carried out by J. Hofrichter. By applying information geometry, first introduced by Amari in 1985, we see that the local state space is an Einstein space and also a dually flat manifold with the Fisher metric; the differential operator of the Kolmogorov equation is the affine Laplacian, which can be represented in various coordinates and on various spaces. Dynamics on the whole state space explains some biological phenomena.
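The discrete Markov chain whose diffusion limit is discussed above is simple to simulate. The sketch below (with invented parameter values) is the pure-drift Wright-Fisher update for one locus with two alleles: each generation, 2N genes are resampled binomially from the current frequency.

```python
import random

def wright_fisher(N, p0, generations, rng):
    """Pure random genetic drift for one locus with two alleles:
    each generation draws 2N genes binomially from the current frequency p."""
    p, traj = p0, [p0]
    for _ in range(generations):
        k = sum(rng.random() < p for _ in range(2 * N))  # Binomial(2N, p) draw
        p = k / (2 * N)
        traj.append(p)
    return traj

rng = random.Random(1)
traj = wright_fisher(N=50, p0=0.5, generations=300, rng=rng)
# Without mutation or selection the chain is eventually absorbed at 0 or 1
# (loss or fixation); rescaling time by 2N and letting N grow yields the
# Kolmogorov diffusion equations mentioned in the abstract.
```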
114

Conformidade à lei de Newcomb-Benford de grandezas astronômicas segundo a medida de Kolmogorov-Smirnov

ALENCASTRO JUNIOR, José Vianney Mendonça de 09 September 2016 (has links)
A lei de Newcomb-Benford, também conhecida como a lei do dígito mais significativo, foi descrita pela primeira vez por Simon Newcomb, sendo apenas embasada estatisticamente após 57 anos pelo físico Frank Benford. Essa lei rege grandezas naturalmente aleatórias e tem sido utilizada por várias áreas como forma de selecionar e validar diversos tipos de dados. Em nosso trabalho tivemos como primeiro objetivo propor o uso de um método substituto ao qui-quadrado, sendo este atualmente o método comumente utilizado pela literatura para verificação da conformidade da Lei de Newcomb-Benford. Fizemos isso pois em uma massa de dados com uma grande quantidade de amostras o método qui-quadrado tende a sofrer de um problema estatístico conhecido por excesso de poder, gerando assim resultados do tipo falso negativo na estatística. Dessa forma propomos a substituição do método qui-quadrado pelo método de Kolmogorov-Smirnov baseado na Função de Distribuição Empírica para análise da conformidade global, pois esse método é mais robusto não sofrendo do excesso de poder e também é mais fiel à definição formal da Lei de Benford, já que o mesmo trabalha considerando as mantissas ao invés de apenas considerar dígitos isolados.
Também propomos investigar um intervalo de confiança para o Kolmogorov-Smirnov baseando-nos em um qui-quadrado que não sofre de excesso de poder por se utilizar o Bootstrapping. Em dois artigos publicados recentemente, dados de exoplanetas foram analisados e algumas grandezas foram declaradas como conformes à Lei de Benford. Com base nisso eles sugerem que o conhecimento dessa conformidade possa ser usado para uma análise na lista de objetos candidatos, o que poderá ajudar no futuro na identificação de novos exoplanetas nesta lista. Sendo assim, um outro objetivo de nosso trabalho foi explorar diversos bancos e catálogos de dados astronômicos em busca de grandezas cuja conformidade à lei do dígito significativo ainda não seja conhecida, a fim de propor aplicações práticas para a área das ciências astronômicas. / The Newcomb-Benford law, also known as the significant-digit law, was first described by the astronomer and mathematician Simon Newcomb and was given a statistical grounding only 57 years later by the physicist Frank Benford. The law governs naturally occurring random quantities and has been used in many fields to select and validate several kinds of data. The first goal of this work is to propose a substitute for the chi-square method, which is currently the method commonly used in the literature to verify conformity to the Newcomb-Benford law. This is necessary because, on data sets with a large number of samples, the chi-square method tends to suffer from a statistical problem known as excess power, producing false negative results. We therefore propose replacing it with the Kolmogorov-Smirnov method based on the empirical distribution function (EDF) for global conformity analysis, since this method is more robust, does not suffer from excess power, and is more faithful to the formal definition of Benford's law, as it works with mantissas instead of isolated digits.
We also propose a confidence interval for the Kolmogorov-Smirnov statistic based on a bootstrapped chi-square that does not suffer from excess power. In two recently published papers, exoplanet data were analyzed and some quantities were declared to conform to the Benford distribution. On this basis, their authors suggest that knowledge of this conformity could be applied to the list of candidate objects and may help identify new exoplanets in that list. A further goal of this work is therefore to explore several astronomical databases and catalogs in search of quantities whose conformity to the significant-digit law is not yet known, in order to propose practical applications for the astronomical sciences.
115

Inégalités de Landau-Kolmogorov dans des espaces de Sobolev / Landau-Kolmogorov inequalities in Sobolev spaces

Abbas, Lamia 18 February 2012 (has links)
Ce travail est dédié à l'étude des inégalités de type Landau-Kolmogorov en norme L2. Les mesures utilisées sont celles d'Hermite, de Laguerre-Sonin et de Jacobi. Ces inégalités sont obtenues en utilisant une méthode variationnelle. Elles font intervenir la norme d'un polynôme p et celles de ses dérivées. Dans un premier temps, on s'intéresse aux inégalités en une variable réelle qui font intervenir un nombre quelconque de normes. Les constantes correspondantes sont prises dans le domaine où une certaine forme bilinéaire est définie positive. Ensuite, on généralise ces résultats aux polynômes à plusieurs variables réelles en utilisant le produit tensoriel dans L2 et en faisant intervenir au plus les dérivées partielles secondes. Pour les mesures d'Hermite et de Laguerre-Sonin, ces inégalités sont étendues à toutes les fonctions d'un espace de Sobolev. Pour la mesure de Jacobi on donne des inégalités uniquement pour les polynômes d'un degré fixé par rapport à chaque variable. / This thesis is devoted to Landau-Kolmogorov type inequalities in the L2 norm. The measures used are the Hermite, Laguerre-Sonin and Jacobi ones. These inequalities are obtained by a variational method and involve the norm of a polynomial p and those of some of its derivatives. First, we focus on inequalities in one real variable that involve any number of norms. The corresponding constants are taken in the domain where a certain bilinear form is positive definite. Then we generalize these results to polynomials in several real variables, using the tensor product in L2 and involving at most the second partial derivatives. For the Hermite and Laguerre-Sonin measures, these inequalities are extended to all functions of a Sobolev space. For the Jacobi measure, inequalities are given only for polynomials of fixed degree with respect to each variable.
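For orientation, the simplest L² Landau-Kolmogorov inequality — on the real line with Lebesgue measure, not the weighted Hermite/Laguerre-Sonin/Jacobi setting of the thesis — follows from Plancherel's theorem and Cauchy-Schwarz (up to the chosen Fourier normalization):

```latex
% Classical L^2(\mathbb{R}) case: for f with f, f'' \in L^2(\mathbb{R}),
\|f'\|_2^2
  = \int_{\mathbb{R}} \xi^2 \, |\hat f(\xi)|^2 \, d\xi
  \le \Bigl( \int_{\mathbb{R}} |\hat f(\xi)|^2 \, d\xi \Bigr)^{1/2}
      \Bigl( \int_{\mathbb{R}} \xi^4 \, |\hat f(\xi)|^2 \, d\xi \Bigr)^{1/2}
  = \|f\|_2 \, \|f''\|_2 .
```

The thesis replaces Lebesgue measure by the classical orthogonal-polynomial weights and determines, variationally, the admissible constants for which the associated bilinear form stays positive definite.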
117

Modélisation cognitive de la pertinence narrative en vue de l'évaluation et de la génération de récits / Cognitive modeling of narrative relevance : towards the evaluation and the generation of stories

Saillenfest, Antoine 25 November 2015 (has links)
Une part importante de l’activité de communication humaine est dédiée au récit d’événements (fictifs ou non). Ces récits doivent être cohérents et intéressants pour être pertinents. Dans le domaine de la génération automatique de récits, la question de l’intérêt a souvent été négligée, ou traitée via l’utilisation de méthodes ad hoc, au profit de la cohérence des structures narratives produites. Nous proposons d’aborder le processus de création des récits sous l’angle de la modélisation quantitative de critères de pertinence narrative via l’application d’un modèle cognitif de l’intérêt événementiel. Nous montrerons que cet effort de modélisation peut servir de guide pour concevoir un modèle cognitivement plausible de génération de narrations. / Humans devote a considerable amount of time to producing narratives. Whatever a story is used for (whether to entertain or to teach), it must be relevant. Relevant stories must be believable and interesting. The field of computational generation of narratives has explored many ways of generating narratives, especially well-formed and understandable ones. The question of what makes a story interesting has however been largely ignored or barely addressed. Only some specific aspects of narrative interest have been considered. No general theoretical framework that would serve as guidance for the generation of interesting and believable narratives has been provided. The aim of this thesis is to introduce a cognitive model of situational interest and use it to offer formal criteria to decide to what extent a story is relevant. Such criteria could guide the development of a cognitively plausible model of story generation.
118

SAND, un protocole de chiffrement symétrique incompressible à structure simple

Baril-Robichaud, Patrick 09 1900 (has links)
Nous avons développé un cryptosystème à clé symétrique hautement sécuritaire qui est basé sur un réseau de substitutions et de permutations. Il possède deux particularités importantes. Tout d'abord, il utilise de très grandes S-Boxes incompressibles, générées aléatoirement, dont la taille d'entrée peut varier entre 256 Kb et 32 Gb. De plus, la phase de permutation est effectuée par un ensemble de fonctions linéaires choisies aléatoirement parmi toutes les fonctions linéaires possibles. Chaque fonction linéaire est appliquée sur tous les bits du bloc de message. Notre protocole possède donc une structure simple qui garantit l'absence de portes dérobées. Nous allons expliquer que notre cryptosystème résiste aux attaques actuellement connues telles que la cryptanalyse linéaire et la cryptanalyse différentielle. Il est également résistant à toute forme d'attaque basée sur un biais en faveur d'une fonction simple des S-Boxes. / We developed a new highly secure symmetric-key algorithm. Our algorithm is SPN-like, with two main particularities. First, we use very large, randomly generated, incompressible S-boxes whose input size varies between 256 Kb and 32 Gb. Secondly, for the permutation part of the algorithm, we use a set of linear functions chosen uniformly at random among all possible linear functions. Each function takes as input all the bits of the message block. Our system has a very simple structure that guarantees the absence of trapdoors. We explain how our algorithm resists known attacks, such as linear and differential cryptanalysis. It is also resistant to any attack based on a bias of the S-boxes toward a simple function.
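The structure described — a random bijective S-box followed by a random *invertible* GF(2)-linear map over the whole block — can be sketched at toy scale. The 8-bit block, the subkey value, and all sizes below are invented for illustration and are orders of magnitude below the 256 Kb to 32 Gb S-boxes of the actual design; this sketch shows only why such a round is a permutation of the block space.

```python
import random

def random_sbox(rng, bits=8):
    """A random bijective S-box on `bits`-bit values (toy stand-in for the
    very large incompressible S-boxes of the design)."""
    table = list(range(1 << bits))
    rng.shuffle(table)
    return table

def gf2_rank(vectors):
    """Rank over GF(2) of bitmask row vectors, via an XOR basis."""
    basis = []
    for v in vectors:
        for b in basis:
            v = min(v, v ^ b)  # reduce v against the current basis
        if v:
            basis.append(v)
    return len(basis)

def random_linear_layer(rng, bits=8):
    """Basis-vector images of a uniformly random invertible GF(2)-linear map
    acting on the whole block (rejection-sample until full rank)."""
    while True:
        cols = [rng.randrange(1 << bits) for _ in range(bits)]
        if gf2_rank(cols) == bits:
            return cols

def apply_linear(cols, x):
    out = 0
    for i, c in enumerate(cols):
        if x >> i & 1:
            out ^= c
    return out

def round_function(x, subkey, sbox, cols):
    """One SPN round: key mixing, substitution, then the linear layer."""
    return apply_linear(cols, sbox[x ^ subkey])

rng = random.Random(7)
sbox = random_sbox(rng)
layer = random_linear_layer(rng)
images = {round_function(x, 0x5A, sbox, layer) for x in range(256)}
# Bijective S-box + invertible linear layer => the round is a permutation
# of the block space, so the construction remains decryptable.
```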
119

Míry kvality klasifikačních modelů a jejich převod / Quality measures of classification models and their conversion

Hanusek, Lubomír January 2003 (has links)
Predictive power of classification models can be evaluated by various measures. The most popular measures in data mining (DM) are the Gini coefficient, the Kolmogorov-Smirnov statistic and lift. Each of these measures is based on a completely different calculation, so an analyst used to one of them may find it difficult to assess the predictive power of a model evaluated by another. The aim of this thesis is to develop a method for converting one performance measure into another. Although the thesis focuses mainly on the above-mentioned measures, it also deals with other measures such as sensitivity, specificity, total accuracy and the area under the ROC curve. During the development of DM models you may need to work with a sample stratified by the values of the target variable Y instead of the whole population containing millions of observations. If you evaluate a model developed on stratified data, you may need to convert these measures to the whole population; this thesis describes how to carry out this conversion. A software application (CPM) enabling all these conversions forms part of this thesis. With this application you can not only convert one performance measure to another, but also convert measures calculated on a stratified sample to the whole population. Besides the above-mentioned performance measures (sensitivity, specificity, total accuracy, Gini coefficient, Kolmogorov-Smirnov statistic), CPM also generates the confusion matrix and performance charts (lift chart, gains chart, ROC chart and KS chart). The thesis comprises the user manual to the application as well as the web address from which the application can be downloaded. The theory described in the thesis was verified on real data.
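Some of the conversions discussed above rest on standard identities: the Gini coefficient equals 2·AUC − 1, and the KS statistic is the largest gap between the score CDFs of the two classes. A minimal sketch with invented scores (this is generic measure arithmetic, not the CPM application itself):

```python
def auc(pos, neg):
    """AUC as P(score_pos > score_neg) + 0.5 * P(tie), over all pairs."""
    wins = sum(1.0 if p > q else 0.5 if p == q else 0.0
               for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

def gini_from_auc(a):
    """Standard identity linking the Gini coefficient to the area under ROC."""
    return 2.0 * a - 1.0

def ks_statistic(pos, neg):
    """Largest vertical gap between the score CDFs of positives and negatives
    (equivalently, max over thresholds of |TPR - FPR|)."""
    def cdf(xs, t):
        return sum(x <= t for x in xs) / len(xs)
    return max(abs(cdf(pos, t) - cdf(neg, t))
               for t in sorted(set(pos) | set(neg)))

# Invented scores for a model that separates the classes perfectly:
pos = [0.9, 0.8, 0.7, 0.6]
neg = [0.4, 0.3, 0.2, 0.1]
a = auc(pos, neg)           # 1.0
g = gini_from_auc(a)        # 1.0
k = ks_statistic(pos, neg)  # 1.0
```

Converting measures computed on a *stratified* sample back to the whole population additionally requires the sampling rates per class; that adjustment is the part the thesis works out and is not sketched here.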
120

A Natural Interpretation of Classical Proofs

Brage, Jens January 2006 (has links)
In this thesis we use the syntactic-semantic method of constructive type theory to give meaning to classical logic, in particular Gentzen's LK.

We interpret a derivation of a classical sequent as a derivation of a contradiction from the assumptions that the antecedent formulas are true and that the succedent formulas are false, where the concepts of truth and falsity are taken to conform to the corresponding constructive concepts, using function types to encode falsity. This representation brings LK to a manageable form that allows us to split the succedent rules into parts. In this way, every succedent rule gives rise to a natural deduction style introduction rule. These introduction rules, taken together with the antecedent rules adapted to natural deduction, yield a natural deduction calculus whose subsequent interpretation in constructive type theory gives meaning to classical logic.

The Gentzen-Prawitz inversion principle holds for the introduction and elimination rules of the natural deduction calculus and allows for a corresponding notion of convertibility. We take the introduction rules to determine the meanings of the logical constants of classical logic and use the induced type-theoretic elimination rules to interpret the elimination rules of the natural deduction calculus. This produces an interpretation injective with respect to convertibility, contrary to an analogous translation into intuitionistic predicate logic.

From the interpretation in constructive type theory and the interpretation of cut by explicit substitution, we derive a full precision contraction relation for a natural deduction version of LK. We use a term notation to formalize the contraction relation and the corresponding cut-elimination procedure.

The interpretation can be read as a Brouwer-Heyting-Kolmogorov (BHK) semantics that justifies classical logic. The BHK semantics utilizes a notion of classical proof and a corresponding notion of classical truth akin to Kolmogorov's notion of pseudotruth. We also consider a second BHK semantics, more closely connected with Kolmogorov's double-negation translation.

The first interpretation reinterprets the consequence relation while keeping the constructive interpretation of truth, whereas the second interpretation reinterprets the notion of truth while keeping the constructive interpretation of the consequence relation. The first and second interpretations act on derivations in much the same way as Plotkin's call-by-value and call-by-name continuation-passing-style translations, respectively.

We conclude that classical logic can be given a constructive semantics by laying down introduction rules for the classical logical constants. This semantics constitutes a proof interpretation of classical logic.
