61 |
Definició d'una metodologia experimental per a l'estudi de resultats en sistemes d'aprenentatge artificial / Definition of an experimental methodology for the study of results in machine learning systems. Martorell Rodon, Josep Maria 23 November 2007 (has links)
This work falls within the research field of the Research Group in Intelligent Systems: machine learning. Its main areas are evolutionary computation and case-based reasoning, with research directed at classification, diagnosis, and prediction problems. All of these fields study large datasets, to which different techniques are applied for knowledge extraction and for solving the problems above. Major advances in these areas (often in the form of new algorithms) coexist with only partial work on suitable methodologies for evaluating such new proposals. Faced with this situation, the thesis presented here proposes a new general framework for evaluating the behaviour of a set of M algorithms that, in order to be analysed, are tested on N benchmark problems. The thesis argues that the usual analysis of these results is clearly insufficient, and that as a consequence the conclusions presented in published work are often partial and in some cases even erroneous.
The work begins with an introductory study of the measures that express the performance of an algorithm when it is tested on a collection of benchmark problems. At this point it is shown that a prior study of the inherent properties of these problems (for instance, via complexity metrics) is needed if the reliability of the resulting conclusions is to be guaranteed. Next, the scope of application of a set of well-known statistical inference techniques is defined, analysing the factors that determine their domain of use. The thesis proposes a general protocol for studying, from a statistical point of view, the behaviour of a set of algorithms, including new graphical models that ease the analysis and a detailed study of the inherent properties of the benchmark problems used. This protocol determines the domain of use of the methodologies for comparing the results obtained on each problem. The thesis further shows how this domain is directly related to the methodology's ability to detect significant differences, and also to its replicability.
Finally, a set of case studies on previously published results is presented, stemming from new algorithms developed by our Research Group, especially in the application of case-based reasoning. In all of them the correct application of the methodologies developed in the previous chapters is shown, and the errors commonly made, which lead to unreliable conclusions, are highlighted.
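As a rough illustration of the kind of analysis the thesis advocates, comparing M algorithms over N test problems with appropriate statistical tests, the sketch below runs a Friedman omnibus test followed by Bonferroni-corrected Wilcoxon post-hoc comparisons on a synthetic accuracy matrix. The data, the number of algorithms and problems, and the 0.05 significance level are illustrative assumptions, not the specific protocol or results of the thesis.

```python
# Sketch: comparing M algorithms over N test problems with non-parametric tests.
# The accuracy matrix below is synthetic; alpha = 0.05 is an assumed threshold.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
M, N = 3, 20                                   # M algorithms, N test problems
base = rng.uniform(0.6, 0.9, size=N)           # per-problem difficulty
offsets = np.array([0.00, 0.02, 0.05])         # per-algorithm shift (synthetic)
acc = np.clip(base[None, :] + offsets[:, None] + rng.normal(0, 0.02, (M, N)), 0, 1)

# Omnibus test: do the M algorithms share the same distribution of ranks?
stat, p = stats.friedmanchisquare(*acc)
print(f"Friedman chi2 = {stat:.3f}, p = {p:.4f}")

# Only if the omnibus test rejects, do pairwise post-hoc comparisons
# (Wilcoxon signed-rank with a Bonferroni correction over the 3 pairs).
if p < 0.05:
    pairs = [(0, 1), (0, 2), (1, 2)]
    for i, j in pairs:
        w, pw = stats.wilcoxon(acc[i], acc[j])
        verdict = "significant" if pw < 0.05 / len(pairs) else "not significant"
        print(f"algorithms {i} vs {j}: p = {pw:.4f} ({verdict})")
```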
|
62 |
Rigorous System-level Modeling and Performance Evaluation for Embedded System Design / Modélisation et Évaluation de Performance pour la Conception des Systèmes Embarqués : Approche Rigoureuse au Niveau Système. Nouri, Ayoub 08 April 2015 (has links)
In the present work, we tackle the problem of modeling and evaluating performance in the context of embedded system design. Embedded systems have become essential for modern societies and have undergone an important evolution. Due to the growing demand for functionality and programmability, software solutions have gained in importance, although they are known to be less efficient than dedicated hardware. Consequently, taking performance into account has become a must, especially with the generalization of resource-constrained devices. We present a rigorous and integrated approach for system-level performance modeling and analysis. The proposed method enables faithful high-level modeling, encompassing both functional and performance aspects, and allows for rapid and accurate quantitative performance evaluation. The approach is model-based and relies on the $\mathcal{S}$BIP formalism for stochastic component-based modeling and formal verification. We use statistical model checking for analyzing performance requirements and introduce a stochastic abstraction technique to enhance its scalability. Faithful high-level models are built by calibrating functional models with low-level performance information, using automatic code generation and statistical inference. We provide a tool flow that automates most of the steps of the proposed approach and illustrate its use on a real-life case study for image processing. We consider the design and mapping of a parallel version of the HMAX object-recognition algorithm on the STHORM many-core platform. We explore timing aspects, and the obtained results show not only the usability of the approach but also its pertinence for making well-founded decisions in the context of system-level design.
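The following sketch illustrates only the Monte Carlo core of statistical model checking mentioned above: estimating the probability that a random execution satisfies a bounded property, with the number of simulations chosen from the Chernoff-Hoeffding bound. The toy latency model, the deadline, and the precision/confidence parameters are assumptions for illustration; they are not the SBIP formalism or the STHORM case study themselves.

```python
# Sketch of the Monte Carlo core of statistical model checking: estimate
# p = P(execution satisfies a bounded property) within +/- delta with
# confidence 1 - epsilon, using the Chernoff-Hoeffding sample bound.
# The "system" here is a toy stochastic latency model (assumed, not SBIP/STHORM).
import math
import random

def simulate_latency(rng):
    # Hypothetical execution: fixed processing cost plus random contention.
    return 2.0 + rng.expovariate(1.0)           # milliseconds (assumed units)

def satisfies(latency, deadline=4.0):
    return latency <= deadline                   # bounded-response property

delta, epsilon = 0.01, 0.05
n = math.ceil(math.log(2 / epsilon) / (2 * delta ** 2))   # Hoeffding bound on runs
rng = random.Random(42)
hits = sum(satisfies(simulate_latency(rng)) for _ in range(n))
print(f"n = {n} runs, estimated P(latency <= 4 ms) = {hits / n:.3f} (+/- {delta})")
```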
|
63 |
Dirty statistical models. Jalali, Ali, 1982- 11 July 2012 (has links)
In fields across science and engineering, we are increasingly faced with problems where the number of variables or features we need to estimate is much larger than the number of observations. Under such high-dimensional scaling, for any hope of statistically consistent estimation, it becomes vital to leverage any potential structure in the problem, such as sparsity, low-rank structure, or block sparsity. However, data may deviate significantly from any one such statistical model. The motivation of this thesis is: can we simultaneously leverage more than one such structural model, to obtain consistency in a larger number of problems, and with fewer samples, than can be obtained by single models? Our approach combines several structural models via simple linear superposition, a technique we term dirty models. The idea is very simple: while any one structure might not capture the data, a superposition of structural classes might. A dirty model thus searches for a parameter that can be decomposed into a number of simpler structures, such as (a) sparse plus block-sparse, (b) sparse plus low-rank, and (c) low-rank plus block-sparse. In this thesis, we propose dirty-model-based algorithms for different problems such as multi-task learning, graph clustering, and time-series analysis with latent factors. We analyze these algorithms in terms of the number of observations needed to estimate the variables. These algorithms are based on convex optimization and can be relatively slow; we therefore provide a class of low-complexity greedy algorithms that not only solve these optimizations faster but also come with guarantees on the solution. Beyond the theoretical results, in each case we provide experimental results to illustrate the power of dirty models.
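A minimal sketch of the superposition idea, for the sparse-plus-low-rank case (b): alternate singular-value thresholding for the low-rank part with entrywise soft-thresholding for the sparse part. The synthetic matrix, the thresholds, and the fixed number of iterations are assumptions for illustration and not the estimators or guarantees analyzed in the thesis.

```python
# Sketch of a "dirty model" decomposition: observed matrix Y ~ L (low-rank) + S (sparse).
# Alternating proximal steps: singular-value thresholding for L, soft-thresholding for S.
# Thresholds tau_l, tau_s are illustrative assumptions, not tuned values from the thesis.
import numpy as np

rng = np.random.default_rng(1)
n = 50
L_true = rng.normal(size=(n, 2)) @ rng.normal(size=(2, n))   # rank-2 component
S_true = np.zeros((n, n))
idx = rng.choice(n * n, size=100, replace=False)
S_true.flat[idx] = rng.normal(scale=5.0, size=100)           # sparse corruptions
Y = L_true + S_true

def svt(A, tau):          # singular-value thresholding (prox of the nuclear norm)
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0)) @ Vt

def soft(A, tau):         # entrywise soft-thresholding (prox of the l1 norm)
    return np.sign(A) * np.maximum(np.abs(A) - tau, 0)

L, S = np.zeros_like(Y), np.zeros_like(Y)
tau_l, tau_s = 1.0, 1.0
for _ in range(200):
    L = svt(Y - S, tau_l)
    S = soft(Y - L, tau_s)

print("relative error in L:", np.linalg.norm(L - L_true) / np.linalg.norm(L_true))
print("nonzeros recovered in S:", int(np.count_nonzero(S)))
```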
|
64 |
Nonparametric Learning in High Dimensions. Liu, Han 01 December 2010 (has links)
This thesis develops flexible and principled nonparametric learning algorithms to explore, understand, and predict high dimensional and complex datasets. Such data appear frequently in modern scientific domains and lead to numerous important applications. For example, exploring high dimensional functional magnetic resonance imaging data helps us to better understand brain functionalities; inferring large-scale gene regulatory networks is crucial for new drug design and development; detecting anomalies in high dimensional transaction databases is vital for corporate and government security.
Our main results include a rigorous theoretical framework and efficient nonparametric learning algorithms that exploit hidden structures to overcome the curse of dimensionality when analyzing massive high dimensional datasets. These algorithms have strong theoretical guarantees and provide high dimensional nonparametric recipes for many important learning tasks, ranging from unsupervised exploratory data analysis to supervised predictive modeling. In this thesis, we address three aspects:
1 Understanding the statistical theories of high dimensional nonparametric inference, including risk, estimation, and model selection consistency;
2 Designing new methods for different data-analysis tasks, including regression, classification, density estimation, graphical model learning, multi-task learning, and spatial-temporal adaptive learning;
3 Demonstrating the usefulness of these methods in scientific applications, including functional genomics, cognitive neuroscience, and meteorology.
In the last part of this thesis, we also present our vision for the future of high dimensional and large-scale nonparametric inference.
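As one concrete example of a high-dimensional nonparametric method of the kind described above, the sketch below fits a sparse additive regression model by backfitting with kernel smoothers and a soft-threshold on each component, in the spirit of sparse additive models. The synthetic data, the bandwidth, and the penalty level are assumptions for illustration, not methods or results from the thesis.

```python
# Sketch of high-dimensional nonparametric regression via a sparse additive model:
# y = sum_j f_j(x_j) + noise, fit by backfitting with kernel smoothers and a
# soft-threshold on each component's norm.  Data, bandwidth h, and penalty lam
# are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
n, d = 200, 20                        # n samples, d features, only 2 are relevant
X = rng.uniform(-1, 1, size=(n, d))
y = np.sin(np.pi * X[:, 0]) + X[:, 1] ** 2 + rng.normal(0, 0.1, n)

def smooth(x, r, h=0.15):
    # Nadaraya-Watson smoother of residual r against covariate x (Gaussian kernel).
    W = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
    return (W @ r) / W.sum(axis=1)

f = np.zeros((d, n))
lam = 0.1
for _ in range(20):                   # backfitting sweeps
    for j in range(d):
        r = y - y.mean() - f.sum(axis=0) + f[j]      # partial residual for feature j
        g = smooth(X[:, j], r)
        norm = np.sqrt(np.mean(g ** 2))
        shrink = max(0.0, 1 - lam / norm) if norm > 0 else 0.0
        f[j] = shrink * (g - g.mean())               # soft-threshold whole component

selected = [j for j in range(d) if np.sqrt(np.mean(f[j] ** 2)) > 1e-8]
print("selected components:", selected)              # expect roughly [0, 1]
```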
|
65 |
Graph Structured Normal Means Inference. Sharpnack, James 01 May 2013 (has links)
This thesis addresses statistical estimation and testing of signals over a graph when measurements are noisy and high-dimensional. Graph structured patterns appear in applications as diverse as sensor networks, virology in human networks, congestion in internet routers, and advertising in social networks. We will develop asymptotic guarantees on the performance of statistical estimators and tests, by stating conditions for consistency in terms of properties of the graph (e.g. graph spectra). The goal of this thesis is to demonstrate theoretically that by exploiting the graph structure one can achieve statistical consistency in extremely noisy conditions.
We begin with the study of a projection estimator called Laplacian eigenmaps, and find that eigenvalue concentration plays a central role in the ability to estimate graph structured patterns. We continue with the study of the edge lasso, a least-squares procedure with a total variation penalty, and determine combinatorial conditions under which changepoints (edges across which the underlying signal changes) on the graph are recovered. We will shift focus to testing for anomalous activations in the graph, using relaxations of the scan statistic: the spectral scan statistic and the graph ellipsoid scan statistic. We will also show how one can form a decomposition of the graph from a spanning tree, which leads to a test for activity in the graph. This in turn leads to the construction of a spanning tree wavelet basis, which can be used to localize activations on the graph.
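A hedged sketch of the first estimator mentioned above: project noisy node observations onto the low-frequency eigenvectors of the graph Laplacian (Laplacian eigenmaps) to denoise a graph-structured signal. The chain graph, the noise level, and the number of eigenvectors retained are illustrative assumptions, not the settings analyzed in the thesis.

```python
# Sketch of a Laplacian-eigenmaps projection estimator: denoise a graph-structured
# signal by projecting noisy node observations onto the k lowest-frequency
# eigenvectors of the graph Laplacian.  The chain graph and k are assumptions.
import numpy as np

n, k = 100, 8
# Chain graph on n nodes: adjacency matrix and combinatorial Laplacian.
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1
L = np.diag(A.sum(axis=1)) - A

# Piecewise-constant ground truth plus Gaussian noise (high-noise regime).
rng = np.random.default_rng(3)
signal = np.where(np.arange(n) < n // 2, 1.0, -1.0)
y = signal + rng.normal(0, 1.0, n)

# Project onto the k eigenvectors with the smallest Laplacian eigenvalues.
eigvals, eigvecs = np.linalg.eigh(L)      # eigenvalues returned in ascending order
U = eigvecs[:, :k]
estimate = U @ (U.T @ y)

mse_raw = np.mean((y - signal) ** 2)
mse_proj = np.mean((estimate - signal) ** 2)
print(f"MSE raw = {mse_raw:.3f}, MSE after Laplacian projection = {mse_proj:.3f}")
```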
|
66 |
Self-Normalized Sums and Directional Conclusions. Jonsson, Fredrik January 2012 (has links)
This thesis consists of a summary and five papers, dealing with self-normalized sums of independent, identically distributed random variables, and three-decision procedures for directional conclusions. In Paper I, we investigate a general set-up for Student's t-statistic. Finiteness of absolute moments is related to the corresponding degrees of freedom and to relevant properties of the underlying distribution, assuming independent, identically distributed random variables. In Paper II, we investigate a certain kind of self-normalized sum. We show that the corresponding quadratic moments are greater than or equal to one, with equality if and only if the underlying distribution is symmetric around the origin. In Paper III, we study linear combinations of independent Rademacher random variables. A family of universal bounds on the corresponding tail probabilities is derived through the technique known as exponential tilting. Connections to self-normalized sums of symmetrically distributed random variables are given. In Paper IV, we consider a general formulation of three-decision procedures for directional conclusions. We introduce three kinds of optimality characterizations and formulate corresponding sufficiency conditions. These conditions are applied to exponential families of distributions. In Paper V, we investigate the Benjamini-Hochberg procedure as a means of confirming a selection of statistical decisions on the basis of a corresponding set of generalized p-values. Assuming independence, we show that control is imposed on the expected average loss among confirmed decisions. Connections to directional conclusions are given.
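As a small illustration of the procedure studied in Paper V, the sketch below implements the Benjamini-Hochberg step-up rule and applies it to synthetic p-values; the mixture of null and non-null p-values and the FDR level q = 0.05 are assumptions for illustration, not data or settings from the thesis.

```python
# Sketch of the Benjamini-Hochberg step-up procedure, applied to synthetic
# p-values (a mix of nulls and non-nulls); the FDR level q is an assumption.
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    m = len(pvals)
    order = np.argsort(pvals)
    sorted_p = np.asarray(pvals)[order]
    # Largest k with p_(k) <= k*q/m; reject the hypotheses ranked 1..k.
    below = np.nonzero(sorted_p <= np.arange(1, m + 1) * q / m)[0]
    rejected = np.zeros(m, dtype=bool)
    if below.size:
        rejected[order[: below[-1] + 1]] = True
    return rejected

rng = np.random.default_rng(4)
# 80 true nulls (uniform p-values) and 20 non-nulls (p-values pushed toward 0).
pvals = np.concatenate([rng.uniform(size=80), rng.beta(0.1, 1.0, size=20)])
rej = benjamini_hochberg(pvals, q=0.05)
print(f"{rej.sum()} of {len(pvals)} hypotheses rejected at FDR level 0.05")
```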
|
67 |
Desenvolvimento de método para inferência de características físicas da água associadas às variações espectrais. Caso de Estudo: Reservatório de Itupararanga/SP / Development of a method for inferring physical characteristics of water associated with spectral variations. Case study: Itupararanga Reservoir, SP. Pereira, Adriana Castreghini de Freitas [UNESP] 28 November 2008 (has links) (PDF)
In current society, drinkable water has been the subject of many debates, mainly in the scientific community, where research focused on water availability and quality makes it possible to prepare diagnoses and point out solutions to planners and decision makers. In this context, the general aim of this research was to develop a method for inferring physical limnological variables that indicate water quality and are associated with its spectral characteristics, in a multiple-use reservoir, and to evaluate their correlation with spectral data collected in situ and extracted from orbital images of high-spatial-resolution satellites. To achieve this, a multispectral image from the Ikonos II satellite was acquired almost simultaneously with the collection of limnological and spectral data in situ, at points sampled adequately over the water body and positioned by means of GPS. Due to the heterogeneous weather conditions during the field survey, a new sampling approach was necessary, dividing the sample into four sets: set 1 (clear sky and mild wind), set 2 (clear sky and moderate to strong wind), set 3 (overcast sky and mild wind) and set 4 (overcast sky and moderate to strong wind)...
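The sketch below illustrates, in a highly simplified form, the inference step described above: relating an in-situ water-quality variable to image reflectance through correlation and a linear fit. All numbers are synthetic, and "turbidity" and the chosen band are hypothetical stand-ins for the limnological variables and spectral bands actually studied.

```python
# Sketch of the inference step: relate in-situ limnological measurements to image
# reflectance by correlation and a simple linear fit.  All values are synthetic;
# "turbidity" and the band are hypothetical stand-ins for the variables studied.
import numpy as np

rng = np.random.default_rng(5)
n_points = 30                                         # sampled points on the reservoir
reflectance = rng.uniform(0.02, 0.10, n_points)       # e.g. a near-infrared band
turbidity = 5.0 + 400.0 * reflectance + rng.normal(0, 2.0, n_points)   # assumed relation

r = np.corrcoef(reflectance, turbidity)[0, 1]         # Pearson correlation
slope, intercept = np.polyfit(reflectance, turbidity, 1)
print(f"Pearson r = {r:.3f}; turbidity ~ {slope:.1f} * reflectance + {intercept:.1f}")
```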
|
69 |
Formation spontanée de chemins : des fourmis aux marches aléatoires renforcées / Spontaneous path formation: from ants to reinforced random walks. Le Goff, Line 15 December 2014 (has links)
This thesis is devoted to modelling the spontaneous formation of preferential paths by walkers that deposit attractive trails along their trajectories. More precisely, through a multidisciplinary approach combining modelling and experimentation, it aims to bring out a set of minimal individual rules that allow such a phenomenon to emerge. To this end, we study from several angles the minimal models known as reinforced random walks (RRW). This work contains two main parts. The first proves new results in probability and statistics. We generalize the work published by M. Benaïm and O. Raimond in 2010 in order to study the asymptotics of a class of RRW in which U-turns are forbidden. We also develop a statistical procedure that, under appropriate regularity hypotheses, estimates the parameters of parametrized RRW and evaluates margins of error. The second part describes the results and analyses of an experimental and behavioural study of the ant Linepithema humile. Part of our work focuses on the role and the values of the parameters of the model defined by J.-L. Deneubourg et al. in 1990. We also investigate the extent to which an RRW can reproduce the movements of an ant in a network. To these ends, we performed experiments confronting ants with networks of one or several forks. We applied the statistical tools developed in this thesis to the experimental data and carried out a comparative study between experiments and simulations of several models.
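As a pointer to the model of Deneubourg et al. (1990) discussed above, the sketch below simulates its binary-choice rule at a single fork, P(branch 1) = (k + A1)^n / ((k + A1)^n + (k + A2)^n), where A1 and A2 are the numbers of previous passages on each branch. The parameter values k = 20, n = 2 and the number of ants are illustrative assumptions, not estimates obtained from the experiments in the thesis.

```python
# Sketch of the binary-choice rule of Deneubourg et al. (1990) at a single fork:
# P(branch 1) = (k + A1)^n / ((k + A1)^n + (k + A2)^n), where A1, A2 count the
# ants that have already taken each branch.  k, n and the number of ants below
# are illustrative values, not estimates from the experiments in the thesis.
import random

def choose_branch(a1, a2, k=20.0, n=2.0, rng=random):
    w1, w2 = (k + a1) ** n, (k + a2) ** n
    return 1 if rng.random() < w1 / (w1 + w2) else 2

rng = random.Random(6)
counts = [0, 0]
for _ in range(1000):                        # 1000 ants cross the fork one by one
    branch = choose_branch(counts[0], counts[1], rng=rng)
    counts[branch - 1] += 1                  # each passage reinforces its branch

print(f"branch 1: {counts[0]} ants, branch 2: {counts[1]} ants")
# Positive feedback typically makes one branch absorb most of the traffic,
# i.e. a preferential path forms spontaneously.
```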
|
70 |
Inferência no Ensino Médio : uma introdução aos testes de hipótese / Inference in high school: an introduction to hypothesis tests. Constantino Junior, Paulo Roberto January 2016 (has links)
Advisor: Prof. Dr. André Ricardo Oliveira da Fonseca / Master's dissertation - Universidade Federal do ABC, Programa de Pós-Graduação em Mestrado Profissional em Matemática em Rede Nacional, 2016. / In the contemporary world it is common to come across frequent research in many domains, social and economic among others. For such research it is vital to collect data and organize it, as well as to put together statistical charts and graphs; however, it is unacceptable for the results to lack a consistent interpretation. Therefore, the objective of this work is to introduce high school students, specifically seniors, to the theory of statistical inference through experimental activities, so that they can, at an elementary level, develop a first understanding of both the means of obtaining a sample and the conclusions that can be drawn about the corresponding population. We thus expect to stimulate the students to constantly seek out information about statistical research, which will be present at many moments of their lives in society.
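As an example of the elementary hypothesis tests the dissertation introduces, the sketch below carries out a two-sided exact binomial test of whether a coin is fair from a classroom sample of flips; the observed count and the 5% significance level are invented for illustration.

```python
# Sketch of an elementary hypothesis test of the kind the dissertation introduces:
# H0: the coin is fair (p = 0.5), tested from a classroom sample of 100 flips.
# The observed count and the 5% significance level are invented for illustration.
from math import comb

def binomial_two_sided_p(k, n, p0=0.5):
    # Two-sided p-value: total probability of outcomes at least as unlikely as k.
    pmf = [comb(n, i) * p0 ** i * (1 - p0) ** (n - i) for i in range(n + 1)]
    return sum(q for q in pmf if q <= pmf[k] + 1e-12)

k, n = 62, 100                      # 62 heads in 100 flips (assumed sample)
p_value = binomial_two_sided_p(k, n)
print(f"p-value = {p_value:.4f}")
print("reject H0 at 5%" if p_value < 0.05 else "do not reject H0 at 5%")
```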
|