31 |
Recursive Methods in Number Theory, Combinatorial Graph Theory, and Probability. Burns, Jonathan, 07 July 2014.
Recursion is a fundamental tool of mathematics used to define, construct, and analyze mathematical objects. This work employs induction, sieving, inversion, and other recursive methods to solve a variety of problems in the areas of algebraic number theory, topological and combinatorial graph theory, and analytic probability and statistics. A common theme of recursively defined functions, weighted sums, and cross-referencing sequences arises in all three contexts, supplemented by sieving methods, generating functions, asymptotics, and heuristic algorithms.
In the area of number theory, this work generalizes the sieve of Eratosthenes to a sequence of polynomial values called polynomial-value sieving. In the case of quadratics, the method of polynomial-value sieving may be characterized briefly as a product presentation of two binary quadratic forms. Polynomials for which the polynomial-value sieving yields all possible integer factorizations of the polynomial values are called recursively-factorable. The Euler and Legendre prime-producing polynomials of the form n²+n+p and 2n²+p, respectively, and Landau's n²+1 are shown to be recursively-factorable. Integer factorizations realized by the polynomial-value sieving method, applied to quadratic functions, are in direct correspondence with the lattice point solutions (X,Y) of the conic sections aX²+bXY+cY²+X-nY=0. The factorization structure of the underlying quadratic polynomial is shown to have geometric properties in the space of the associated lattice point solutions of these conic sections.
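As an illustration of the underlying idea (our own sketch, not the algorithm of the thesis), the congruence f(n+p) ≡ f(n) (mod p) lets the sieve of Eratosthenes be transplanted to the values of f(n) = n²+1: each prime divides f along whole arithmetic progressions of n, so striking out progressions factors every value in a range.

```python
def small_primes(limit):
    # classical sieve of Eratosthenes over the integers
    flags = [True] * (limit + 1)
    flags[0:2] = [False, False]
    for i in range(2, int(limit ** 0.5) + 1):
        if flags[i]:
            flags[i * i :: i] = [False] * len(flags[i * i :: i])
    return [i for i, f in enumerate(flags) if f]

def polynomial_value_sieve(N):
    # Sieve the values f(n) = n^2 + 1 for n = 0..N.  Since
    # f(n + p) == f(n) (mod p), a prime p divides f along whole
    # arithmetic progressions of n, so striking out progressions
    # factors every value, as Eratosthenes' sieve does for integers.
    rem = [n * n + 1 for n in range(N + 1)]    # unfactored cofactors
    factors = [[] for _ in range(N + 1)]
    for p in small_primes(int((N * N + 1) ** 0.5) + 1):
        # roots of n^2 + 1 == 0 (mod p), found by brute force
        for r in (n for n in range(p) if (n * n + 1) % p == 0):
            for n in range(r, N + 1, p):
                while rem[n] % p == 0:
                    rem[n] //= p
                    factors[n].append(p)
    for n in range(N + 1):
        if rem[n] > 1:   # leftover cofactor exceeds sqrt(f(n)): prime
            factors[n].append(rem[n])
    return factors
```

For n up to 40, the values left with a single factor are exactly those for which n²+1 is prime: n = 1, 2, 4, 6, 10, 14, 16, 20, 24, 26, 36, 40.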
In the area of combinatorial graph theory, this work considers two topological structures used to model the process of homologous genetic recombination: assembly graphs and chord diagrams. The result of a homologous recombination can be recorded as a sequence of signed permutations called a micronuclear arrangement. In the assembly graph model, each micronuclear arrangement corresponds to a directed Hamiltonian polygonal path within a directed assembly graph. Starting from a given assembly graph, we construct all the associated micronuclear arrangements. Another way of modeling genetic rearrangement is to represent precursor and product genes as a sequence of blocks which form arcs of a circle. Associating matching blocks in the precursor and product gene with chords produces a chord diagram. The braid index of a chord diagram can be used to measure the scope of interaction between the crossings of the chords. We augment the brute-force algorithm for computing the braid index with a divide-and-conquer strategy. Both assembly graphs and chord diagrams are closely associated with double occurrence words, so we classify and enumerate the double occurrence words based on several notions of irreducibility.

In the area of analytic probability, moments abstractly describe the shape of a probability distribution. Over the years, numerous varieties of moments, such as central moments, factorial moments, and cumulants, have been developed to assist in statistical analysis. We use inversion formulas to compute high-order moments of various types for common probability distributions, and show how the successive ratios of moments can be used for distribution and parameter fitting. We consider examples for both simulated binomial data and the probability distribution affiliated with the braid index counting sequence.
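Returning to the double occurrence words mentioned above, they can be made concrete with a small enumerator (our own sketch): a canonical double occurrence word on n letters, with first occurrences in increasing order, is exactly a perfect matching of its 2n positions, so there are (2n-1)!! of them.

```python
def matchings(positions):
    # perfect matchings of an even-sized list of positions
    if not positions:
        yield []
        return
    first, rest = positions[0], positions[1:]
    for i in range(len(rest)):
        for m in matchings(rest[:i] + rest[i + 1:]):
            yield [(first, rest[i])] + m

def double_occurrence_words(n):
    # Canonical double occurrence words on letters 1..n: every letter
    # occurs exactly twice and first occurrences appear in increasing
    # order, one representative per relabelling class.  Each word is a
    # perfect matching of the 2n positions, hence (2n-1)!! words.
    words = []
    for m in matchings(list(range(2 * n))):
        w = [0] * (2 * n)
        for letter, (a, b) in enumerate(sorted(m), start=1):
            w[a] = w[b] = letter
        words.append(tuple(w))
    return words
```

The counts for n = 1, 2, 3, 4 are 1, 3, 15, 105, matching the double factorials.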
Finally, we consider a sequence of multiparameter binomial sums which shares properties with the moment sequences generated by the binomial and beta-binomial distributions. This sequence of sums behaves asymptotically like the high-order moments of the beta distribution, and has complete monotonicity properties.
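The moment-ratio fitting described above can be sketched on the binomial case (toy parameters of our own choosing): for X ~ Binomial(n, p) the factorial moments are mu_(k) = n(n-1)...(n-k+1) p^k, so the successive ratios mu_(k+1)/mu_(k) = (n-k)p fall on a line whose slope and intercept recover p and n.

```python
from math import comb

def factorial_moment(pmf, k):
    # E[X (X-1) ... (X-k+1)] for a pmf given as {value: probability}
    total = 0.0
    for x, prob in pmf.items():
        ff = 1
        for j in range(k):
            ff *= x - j
        total += ff * prob
    return total

n, p = 10, 0.3                      # toy parameters for the sketch
pmf = {x: comb(n, x) * p ** x * (1 - p) ** (n - x) for x in range(n + 1)}

# mu_(k+1)/mu_(k) = (n - k) p, linear in k for the binomial family
ratios = [factorial_moment(pmf, k + 1) / factorial_moment(pmf, k)
          for k in range(1, 5)]
slope = ratios[1] - ratios[0]       # equals -p
p_hat = -slope
n_hat = ratios[0] / p_hat + 1       # ratio at k = 1 is (n - 1) p
```

The linearity of these ratios in k is what singles out the binomial family and turns moment ratios into a fitting device.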
|
32 |
Lercho ir Selbergo dzeta funkcijų reikšmių pasiskirstymai / Value distribution of Lerch and Selberg zeta-functions. Grigutis, Andrius, 27 December 2012.
The doctoral dissertation contains the material of scientific investigations carried out in 2008-2012 in the Faculty of Mathematics and Informatics at Vilnius University. The dissertation includes new theorems for the value distribution of Lerch and Selberg zeta-functions and computer calculations performed using the computational software program MATHEMATICA.
The dissertation consists of the introduction, 3 chapters, the conclusions and the references. The results of the thesis are published in three scientific articles in Lithuanian and foreign journals, reported in scientific conferences in Lithuania and abroad and at the seminars of the department.
In the first chapter, limit theorems for several cases of the Lerch zeta-functions are proved. In the 1940s, Selberg proved that the suitably normalized logarithm of the modulus of the Riemann zeta-function on the critical line has a standard normal distribution. Selberg's proof was based on the Euler product, which the Riemann zeta-function possesses but which Lerch zeta-functions in general lack.
In the second chapter, a theorem estimating the zeros of the Lerch transcendent function in vertical strips of the complex plane is proved, and computer calculations of zeros in the region Re(s)>1 are performed using MATHEMATICA.
In the third chapter, the monotonicity properties of two Selberg zeta-functions are investigated. The monotonicity of these functions is directly related to the location of their zeros in the critical strip. The results are compared to the monotonicity... [to full text]
|
34 |
Uniformly Area Expanding Flows in Spacetimes. Xu, Hangjun, January 2014.
The central object of study of this thesis is inverse mean curvature vector flow of two-dimensional surfaces in four-dimensional spacetimes. Being a system of forward-backward parabolic PDEs, the inverse mean curvature vector flow equation lacks a general existence theory. Our main contribution is proving that there exist infinitely many spacetimes, not necessarily spherically symmetric or static, that admit smooth global solutions to inverse mean curvature vector flow. Prior to our work, such solutions were only known in spherically symmetric and static spacetimes. The technique used in this thesis might be important for proving the Spacetime Penrose Conjecture, which remains open today.

Given a spacetime $(N^{4}, \bar{g})$ and a spacelike hypersurface $M$, for any closed surface $\Sigma$ embedded in $M$ satisfying some natural conditions, one can "steer" the spacetime metric $\bar{g}$ so that the mean curvature vector field of $\Sigma$ becomes tangential to $M$ while keeping the induced metric on $M$. This can be used to construct more examples of smooth solutions to inverse mean curvature vector flow from smooth solutions to inverse mean curvature flow in a spacelike hypersurface.
|
35 |
Contributions à l'analyse de fiabilité structurale : prise en compte de contraintes de monotonie pour les modèles numériques / Contributions to structural reliability analysis: accounting for monotonicity constraints in numerical models. Moutoussamy, Vincent, 13 November 2015.
This thesis takes place in a structural reliability context involving numerical models that implement a physical phenomenon. The reliability of an industrial component is summarised by two failure indicators, a probability and a quantile. The numerical models studied are considered deterministic and black-box. Nonetheless, knowledge of the underlying physical phenomenon allows some hypotheses to be made about the model. The originality of this work lies in taking the monotonicity properties of the phenomenon into account when computing these indicators. The main interest of this hypothesis is that it provides guaranteed control of the indicators, in the form of bounds obtained from an appropriate design of numerical experiments. The thesis focuses on two themes associated with this monotonicity hypothesis. The first is the study of these bounds for probability estimation, including the influence of the dimension and of the chosen design of experiments on the quality of the bounds. The second exploits the information provided by these bounds to estimate a probability or a quantile as accurately as possible. For probability estimation, the aim is to improve the existing methods devoted to probability estimation under monotonicity constraints. The main steps of probability estimation are then adapted to bounding and estimating a quantile. These methods are finally applied to an industrial case.
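The guaranteed bounds can be sketched on a toy case (our own illustration, with an assumed two-dimensional monotone model g): for a coordinatewise increasing g and uniform inputs, any grid cell whose lower corner already fails lies entirely inside the failure region, and any cell whose upper corner is safe lies entirely outside it, which brackets the failure probability.

```python
def monotone_bounds(g, t, m):
    # Bounds on p = P(g(U) >= t) for U uniform on the unit square and
    # g increasing in each coordinate, from an m x m grid of cells:
    # a cell whose lower corner fails lies entirely in the failure
    # region; a cell whose upper corner is safe lies entirely outside.
    certain_fail = certain_safe = 0
    for i in range(m):
        for j in range(m):
            if g(i / m, j / m) >= t:               # lower corner fails
                certain_fail += 1
            elif g((i + 1) / m, (j + 1) / m) < t:  # upper corner safe
                certain_safe += 1
    return certain_fail / m ** 2, 1 - certain_safe / m ** 2

# Toy monotone model: true failure probability P(X + Y >= 1) = 1/2.
low, high = monotone_bounds(lambda x, y: x + y, 1.0, 64)
```

The interval [low, high] is certain, not statistical: no sampling error enters, only the resolution of the design of experiments.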
|
36 |
Classes de testes de hipóteses / Classes of hypotheses tests. Rafael Izbicki, 08 June 2010.
In Statistical Inference, it is usual, after an experiment is performed, to test simultaneously a set of hypotheses of interest concerning an unknown parameter. A statistical test is then performed for each hypothesis, and a conclusion about the parameter is drawn from it. The objective of this work is to evaluate the (lack of) logical coherence among the conclusions obtained from tests conducted after the observation of a single experiment. A definition of a class of hypothesis tests, a function that associates a test function to each hypothesis of interest, is presented.
Some properties that reflect what one would expect, in terms of logical coherence, from tests of different hypotheses are then evaluated. These properties are exemplified by classes of hypothesis tests that satisfy them. Next, sets of axioms based on the properties studied are proposed for classes of hypothesis tests. Usual classes of hypothesis tests are investigated with respect to these sets of axioms, and some properties that follow from them are analyzed. Finally, a result that establishes a connection of sorts between hypothesis testing and point estimation is presented.
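One such coherence property, monotonicity (if A ⊆ B and B is rejected, then A must be rejected too), can be checked by brute force for a simple class. The posterior values below are toy numbers of our own choosing, and the class considered rejects a hypothesis when its posterior probability falls below a cutoff.

```python
from itertools import chain, combinations

theta_space = (0, 1, 2, 3)                    # toy parameter space
posterior = {0: 0.1, 1: 0.2, 2: 0.3, 3: 0.4}  # assumed posterior masses

def hypotheses(s):
    # all nonempty subsets of s, viewed as hypotheses about theta
    return chain.from_iterable(
        combinations(s, r) for r in range(1, len(s) + 1))

def reject(H, cutoff=0.5):
    # the class of tests: reject H when its posterior mass is small
    return sum(posterior[t] for t in H) < cutoff

# Monotonicity: if A is contained in B and B is rejected, A is
# rejected too.  It holds for this class because posterior mass can
# only shrink when passing to a subset.
coherent = all(
    (not reject(B)) or reject(A)
    for B in hypotheses(theta_space)
    for A in hypotheses(B)
)
```

Other classes, such as those built from independent p-values per hypothesis, need not satisfy this property, which is the kind of contrast the axiomatic study makes precise.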
|
37 |
Estimation des moindres carrés d'une densité discrète sous contrainte de k-monotonie et bornes de risque. Application à l'estimation du nombre d'espèces dans une population. / Least-squares estimation of a discrete density under constraint of k-monotonicity and risk bounds. Application for the estimation of the number of species in a population. Giguelay, Jade, 27 September 2017.
This thesis belongs to the field of nonparametric density estimation under shape constraints. The densities are discrete and the shape considered, called k-monotonicity with k an integer greater than 1, is a generalization of convexity; the integer k indicates the degree of hollowness of a convex function. The manuscript is composed of three parts, together with an introduction, a conclusion and an appendix.

Introduction: The introduction is structured in three chapters. The first is a state of the art of density estimation under shape constraints. The second is a synthesis of the thesis, available in French and in English. Finally, the third chapter summarizes the notation and the classical mathematical results used in the manuscript.

Part I: Estimation of a discrete distribution under a k-monotonicity constraint. Two least-squares estimators of a discrete distribution p* under a constraint of k-monotonicity are proposed. Their characterization is based on the decomposition of k-monotone sequences on a spline basis, and on the properties of their primitives. Their statistical properties are studied; in particular, their quality of estimation is measured in terms of the quadratic error, and they are shown to converge at the parametric rate. An algorithm derived from the support reduction algorithm is implemented in the R package pkmon. A simulation study illustrates the properties of the estimators. This work has been published in the Electronic Journal of Statistics (Giguelay, 2017).

Part II: Calculation of risk bounds. In the first chapter of Part II, a methodology for calculating risk bounds for the least-squares estimator is given. These bounds are adaptive in that they depend on a compromise between the distance of p* to the frontier of the set of k-monotone densities with finite support, and the complexity (linked to the spline decomposition) of the densities in this set that are close to p*. The methodology, based on the variational formula for the risk proposed by Chatterjee (2014), is generalized to the framework of discrete k-monotone densities. The bracketing entropies of the relevant functional spaces are then calculated in order to control the supremum of the empirical process involved in the quadratic risk. The optimality of the risk bound is discussed in comparison with the results previously obtained in the continuous case and in the Gaussian regression framework. In the second chapter of Part II, several complementary results concerning bracketing entropies of spaces of k-monotone sequences are presented.

Part III: Estimating the number of species in a population and tests of k-monotonicity. The last part deals with the problem of estimating the number of species present in a given area at a given time, based on the abundances of the species that have been observed. A definition of a k-monotone abundance distribution is proposed; it allows the probability of observing zero individuals of a species to be related to the truncated abundance distribution, which makes the estimation problem identifiable. Two approaches are proposed: the first is based on the least-squares estimator under a constraint of k-monotonicity, the second on the empirical distribution. Both estimators are compared in a simulation study. Because the estimator of the number of species depends strongly on the degree of monotonicity k chosen in the model, a procedure for choosing this parameter, based on nested testing procedures, is proposed. The asymptotic level and power of the testing procedures are calculated and evaluated in a simulation study, and the method is applied to real data sets from the literature.
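As a concrete sketch (ours, not the estimator of the thesis), discrete k-monotonicity can be checked through iterated finite differences: a sequence p is k-monotone when (-1)^j (Δ^j p)(i) ≥ 0 for every j = 1..k, where (Δp)(i) = p(i+1) - p(i). The geometric distribution is completely monotone, so it passes the check for every k.

```python
def is_k_monotone(p, k):
    # A finite sequence p(0..N) is k-monotone (in the discrete sense)
    # when (-1)^j * (Delta^j p)(i) >= 0 for j = 1..k, with
    # (Delta p)(i) = p(i+1) - p(i).  k = 1 means decreasing,
    # k = 2 means decreasing and convex, and so on.
    seq = list(p)
    for j in range(1, k + 1):
        seq = [b - a for a, b in zip(seq, seq[1:])]
        if any((-1) ** j * d < 0 for d in seq):
            return False
    return True

theta = 0.5
# Geometric pmf (truncated): completely monotone, hence k-monotone
# for every k, since Delta^j p(i) = (1-theta) theta^i (theta-1)^j.
geometric = [(1 - theta) * theta ** i for i in range(30)]
```

Such a check is the feasibility side of the problem; the least-squares estimator of the thesis projects an empirical distribution onto this constraint set.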
|
38 |
Monotonicity in shared-memory program verification. Kaiser, Alexander, January 2013.
Predicate abstraction is a key enabling technology for applying model checkers to programs written in mainstream languages. It has been used very successfully for debugging sequential system-level C code. Although model checking was originally designed for analysing concurrent systems, there is little evidence of fruitful applications of predicate abstraction to shared-variable concurrent software. The goal of the present thesis is to close this gap. We propose an algorithmic solution implementing predicate abstraction that targets safety properties in non-recursive programs executed by an unbounded number of threads, which communicate via shared memory or higher-level mechanisms such as mutexes and broadcasts. As system-level code makes frequent use of such primitives, their correct usage is critical to ensure reliability. Monotonicity, the property that thread actions remain executable when further threads are added to the current global state, is a natural and common feature of human-written concurrent software. It is also useful: if every thread's memory is finite, monotonicity often guarantees the decidability of safety properties even when the number of running threads is unspecified. In this thesis, we show that the process of obtaining finite-data thread abstractions for model checking is not always compatible with monotonicity. Predicate-abstracting certain mainstream asynchronous software, such as the ticket busy-wait lock algorithm, results in non-monotone multi-threaded Boolean programs despite the monotonicity of the input program: the monotonicity is lost in the abstraction. As a result, the unbounded-thread Boolean programs do not give rise to well quasi-ordered systems [1], for which sound and complete safety checking algorithms are available. In fact, safety checking turns out to be undecidable for the obtained class of abstract programs, despite the finiteness of the individual threads' state spaces.
Our solution is to restore monotonicity in the abstract program, using an inexpensive closure operator that precisely preserves all safety properties of the (non-monotone) abstract program without the closure. As a second contribution, we present a novel, sound and complete, yet empirically much improved algorithm for verifying abstractions, applicable to general well quasi-ordered systems. Our approach is to gradually widen the set of safety queries during the search with program states that involve fewer threads and are thus easier to decide, and that are likely to finalise the decision on earlier queries. To counter the negative impact of "bad guesses", i.e. program states that turn out to be feasible, the search is supported by a parallel engine that generates such states; these are never selected for widening. We present an implementation of our techniques and extensive experiments on multi-threaded C programs, including device driver code from FreeBSD and Solaris. The experiments demonstrate that, by exploiting monotonicity, model checking techniques enabled by predicate abstraction scale to realistic programs of even a few thousand lines of multi-threaded C code.
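For readers unfamiliar with the ticket busy-wait lock named above, here is an illustrative Python sketch of the protocol (the thesis analyses its abstracted C form, not this code): a thread atomically draws the next ticket, then spins until the "now serving" counter reaches it; the holder advances the counter on release.

```python
import itertools
import threading
import time

class TicketLock:
    # Sketch of the ticket busy-wait lock: FIFO mutual exclusion
    # via a ticket dispenser and a "now serving" counter.
    def __init__(self):
        self._tickets = itertools.count()
        self._draw = threading.Lock()   # keeps the ticket draw atomic
        self.serving = 0

    def acquire(self):
        with self._draw:
            ticket = next(self._tickets)
        while self.serving != ticket:   # busy wait for our turn
            time.sleep(0)               # yield so the holder can run
        return ticket

    def release(self):
        self.serving += 1               # only the lock holder calls this

lock = TicketLock()
counter = 0

def worker():
    global counter
    for _ in range(100):
        lock.acquire()
        counter += 1                    # protected critical section
        lock.release()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# counter ends at 400: every increment ran under mutual exclusion
```

The lock is monotone as a program (adding spinning threads does not disable anyone's actions), yet its predicate abstraction loses that property, which is the phenomenon the thesis addresses.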
|
39 |
Testes de hipóteses em eleições majoritárias / Test of hypothesis in majoritarian election. Fossaluza, Victor, 16 June 2008.
The problem of inference about a proportion, widely explored in the statistical literature, plays a key role in the development of several theories of statistical inference and is invariably an object of investigation and discussion in comparative studies among the different schools of inference. In addition, the estimation of proportions, as well as hypothesis testing for proportions, is very important in many areas of knowledge, as it constitutes a simple and universal quantitative method. In this work, a comparative study between the classical and Bayesian approaches to testing the hypotheses of occurrence or not of a second round, in a typical scenario of a two-round majoritarian (absolute majority) election in Brazil, is developed.
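The two ingredients being compared can be sketched with toy numbers of our own (not the thesis's data): with s first-round votes for the leading candidate out of n valid votes, the classical one-sided p-value for H0: θ ≤ 1/2 ("a second round is needed") is a binomial tail, while under a uniform prior the posterior probability of θ > 1/2 is Beta(s+1, n-s+1) mass above 1/2, computable exactly through the order-statistic identity P(Beta(k, m+1-k) ≤ x) = P(Bin(m, x) ≥ k).

```python
from math import comb

def binom_cdf(n, p, k):
    # P(Bin(n, p) <= k)
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i)
               for i in range(k + 1))

def runoff_tests(successes, trials):
    # Classical: one-sided p-value against H0: theta <= 1/2,
    # from the upper binomial tail at theta = 1/2.
    p_value = 1 - binom_cdf(trials, 0.5, successes - 1)
    # Bayesian: posterior P(theta > 1/2) under a uniform prior.
    # Posterior is Beta(s+1, f+1); the identity above with
    # m = trials + 1 gives P(theta > 1/2) = P(Bin(n+1, 1/2) <= s).
    posterior_prob = binom_cdf(trials + 1, 0.5, successes)
    return p_value, posterior_prob
```

For 7 successes in 10 trials, the p-value is 176/1024 ≈ 0.172 while the posterior probability of a first-round win is 1816/2048 ≈ 0.887, illustrating how differently the two schools summarize the same data.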
|
40 |
Concepções sobre limite: imbricações entre obstáculos manifestos por alunos do ensino superior [Conceptions of limit: imbrications among obstacles manifested by higher education students]. Celestino, Marcos Roberto, 08 October 2008.
This work belongs to the research line "History, Epistemology and Didactics of Mathematics", within the process of teaching and learning Differential and Integral Calculus. Its focus is on the conceptions that higher education students hold about limits, and on possible imbrications among epistemological obstacles related to those conceptions. To this end, we worked with numerical sequences, addressing aspects of convergence and monotonicity and the relationship between expressions such as "to have a limit" and "to be bounded". We designed a set of activities taking into account the results of previous research on the concept of limit and the epistemological obstacles identified in that research. The subjects of the study are university students who had already studied limits of functions of a real variable: students of the fifth semester of the Electrical Engineering course of a private university located in the east area of São Paulo. The data were analysed with the aid of the software C.H.I.C. (Hierarchical, Implicative and Cohesive Classification), which extracts, from a data set crossing subjects (or objects) with variables (or attributes), association rules between variables, provides a probabilistic index of the quality of each association, and represents the structure of the variables as a hierarchical classification tree and/or an implicative graph between attributes. The analysis of the results was grounded in the research of Cornu (1983), Sierpinska (1985) and Robert (1982). It allowed us to identify evidence of possible imbrications between some obstacles, as well as similarities and dissimilarities among the meanings that the subjects attributed to expressions used when the notion of limit is studied.
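The distinction between "to have a limit" and "to be bounded" that the activities targeted can be illustrated numerically (a sketch of our own): a_n = (-1)^n is bounded but oscillates forever, while b_n = 1 - 1/n is monotone and bounded, hence convergent.

```python
# a_n = (-1)^n is bounded but has no limit; b_n = 1 - 1/n is
# monotone and bounded, hence convergent (to 1).
a = [(-1) ** n for n in range(1, 1001)]
b = [1 - 1 / n for n in range(1, 1001)]

def tail_spread(seq, tail=100):
    # max - min over the last `tail` terms: it shrinks toward 0 when
    # the sequence settles near a limit, but stays large for a
    # bounded sequence that keeps oscillating.
    t = seq[-tail:]
    return max(t) - min(t)
```

The spread of the tail of a stays at 2 while that of b shrinks below 0.001, which is precisely the conceptual gap the activities probe.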
|