211

The Strucplot Framework: Visualizing Multi-way Contingency Tables with vcd

Hornik, Kurt, Zeileis, Achim, Meyer, David 10 1900 (has links) (PDF)
This paper describes the "strucplot" framework for the visualization of multi-way contingency tables. Strucplot displays include hierarchical conditional plots such as mosaic, association, and sieve plots, and can be combined into more complex, specialized plots for visualizing conditional independence, GLMs, and the results of independence tests. The framework's modular design allows flexible customization of the plots' graphical appearance, including shading, labeling, spacing, and legend, by means of "graphical appearance control" functions. The framework is provided by the R package vcd.
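The residual-based shading central to strucplot displays is driven by Pearson residuals of an independence model. The vcd package itself is R; as a language-neutral illustration of the residual computation (not the vcd API), a two-way version can be sketched in Python:

```python
import numpy as np

def pearson_residuals(table):
    """Pearson residuals (observed - expected) / sqrt(expected)
    under the independence model for a two-way table."""
    table = np.asarray(table, dtype=float)
    n = table.sum()
    expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / n
    return (table - expected) / np.sqrt(expected)

# Toy 2 x 2 table; cells with large |residual| are the ones a
# shaded mosaic or association plot highlights.
tab = np.array([[32, 11],
                [10, 25]])
res = pearson_residuals(tab)
```

In vcd, these residuals determine both the colour and the saturation of the tiles; the sketch only computes the values they are based on.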
212

Modifikace metody Pivot Tables pro perzistentní metrické indexování / Modification of Pivot Tables method for persistent metric indexing

Moško, Juraj January 2011 (has links)
Pivot Tables is one of the most effective metric access methods, optimized for the number of distance computations in similarity search. This work proposes a new modification of the Pivot Tables method that is optimized not only for distance computations but also for the number of I/O operations. The proposed Clustered Pivot Tables method indexes clusters of similar objects created by another metric access method, the M-tree. Indexing clustered objects benefits search within the indexed database: because the clusters are paged in secondary memory, a page containing a cluster that does not satisfy a particular query is never accessed, and non-relevant objects outside the query range are not loaded into memory, which reduces both the number of I/O operations and the total volume of transferred data. The correctness of the proposed approach was verified experimentally, and the results of the proposed method were compared with selected metric access methods.
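The distance-computation savings of pivot tables come from the triangle inequality: precomputed distances to a few pivots give a lower bound |d(q,p) - d(o,p)| on d(q,o), so many objects can be discarded without computing their actual distance. A minimal in-memory sketch of this filtering step (illustrative only; it covers neither the clustering nor the paging described in the thesis, and all names are hypothetical):

```python
import math

def dist(a, b):
    # Euclidean distance; any metric works.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def build_pivot_table(objects, pivots):
    # Precompute d(o, p) for every object and pivot.
    return [[dist(o, p) for p in pivots] for o in objects]

def range_query(q, r, objects, pivots, table):
    qp = [dist(q, p) for p in pivots]      # distances query -> pivots
    result, computed = [], 0
    for o, row in zip(objects, table):
        # Lower bound on d(q, o) from the triangle inequality.
        lb = max(abs(qd - od) for qd, od in zip(qp, row))
        if lb > r:
            continue                       # pruned without a distance call
        computed += 1
        if dist(q, o) <= r:
            result.append(o)
    return result, computed

objects = [(i, i % 3) for i in range(100)]
pivots = [(0, 0), (99, 2)]
table = build_pivot_table(objects, pivots)
res, computed = range_query((10, 1), 3.0, objects, pivots, table)
```

The query is exact: pruning only discards objects whose lower bound already exceeds the radius, so `res` contains precisely the objects within range while `computed` counts the few distances actually evaluated.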
213

Statistické usuzování v analýze kategoriálních dat / Statistical inference for categorical data analysis

Kocáb, Jan January 2010 (has links)
This thesis introduces statistical methods for categorical data. These methods are used especially in the social sciences, such as sociology, psychology, and political science, but their importance has also grown in the medical and technical sciences. The first part covers statistical inference for a proportion, describing classical, exact, and Bayesian methods for estimation and hypothesis testing. With a large sample, the exact distribution can be approximated by the normal distribution; with a small sample this approximation cannot be used, and a discrete distribution is required, which makes inference more complicated. The second part deals with the analysis of two categorical variables in contingency tables. It explains measures of association for 2 x 2 contingency tables, such as the difference of proportions and the odds ratio, and shows how independence can be tested for both large and small samples. With a small sample, classical chi-squared tests are not appropriate and alternative methods are needed; this part therefore presents a variety of exact tests of independence as well as the Bayesian approach for the 2 x 2 table. It ends with a table for two dependent samples, where the question is whether two variables give identical results, which occurs when the marginal proportions are equal. In the last part, the methods are applied to data and the results are discussed.
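The small-sample caveat above can be seen directly: for a single proportion, the exact binomial test and its normal approximation can disagree about significance when n is small. A short illustration with made-up numbers (using scipy.stats):

```python
import math
from scipy.stats import binomtest, norm

# H0: p = 0.5, with k = 9 successes observed out of n = 11 trials.
n, k, p0 = 11, 9, 0.5

exact = binomtest(k, n, p0).pvalue              # exact two-sided test

# Large-sample normal approximation (Wald z test, no continuity
# correction): only justified when n*p0 and n*(1-p0) are large.
z = (k / n - p0) / math.sqrt(p0 * (1 - p0) / n)
approx = 2 * (1 - norm.cdf(abs(z)))
```

Here the approximation "rejects" at the 5% level while the exact test does not, which is exactly why the discrete distribution must be used for small samples.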
214

Testes bayesianos para homogeneidade marginal em tabelas de contingência / Bayesian tests for marginal homogeneity in contingency tables

Carvalho, Helton Graziadei de 06 August 2015 (has links)
Tests of hypotheses for marginal proportions in contingency tables play a fundamental role, for instance, in the investigation of behaviour (or opinion) change. However, most texts in the literature are concerned with tests that assume independent populations (e.g., homogeneity tests). Some works explore hypothesis tests for dependent proportions, such as the McNemar test for 2 x 2 contingency tables. The generalization of the McNemar test to k x k contingency tables, called the marginal homogeneity test, usually requires asymptotic approximations under the classical approach. Nevertheless, for small sample sizes or sparse tables, such methods may produce imprecise results. In this work, we review classical and Bayesian measures of evidence commonly applied to compare two marginal proportions. We develop the Full Bayesian Significance Test (FBST) to investigate marginal homogeneity in two-way and multidimensional contingency tables. The FBST is based on a measure of evidence, called the e-value, which does not depend on asymptotic results, does not violate the likelihood principle, and satisfies several logical properties expected of hypothesis tests. Consequently, the FBST approach to testing marginal homogeneity overcomes several limitations usually faced by other procedures.
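For the 2 x 2 case mentioned above, McNemar's test depends only on the discordant cells: under marginal homogeneity, one kind of discordant count follows a Binomial(b + c, 1/2) distribution, so an exact small-sample p-value needs no asymptotics. A self-contained sketch of that exact test (the FBST itself requires a posterior over the full table and is not shown here):

```python
from math import comb

def mcnemar_exact(b, c):
    """Exact two-sided McNemar p-value from the discordant counts
    b = n01 and c = n10, using the Binomial(b + c, 1/2) null."""
    n = b + c
    k = min(b, c)
    # One tail: P(X <= min(b, c)) under Binomial(n, 1/2).
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)   # double the tail, capped at 1

# Opinion change: 8 respondents switched yes->no, 2 switched no->yes.
p = mcnemar_exact(8, 2)
```

With only 10 discordant pairs the chi-squared version of the test would rest on a shaky approximation, which is the situation the thesis's exact Bayesian alternative targets.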
215

Préparation non paramétrique des données pour la fouille de données multi-tables / Non-parametric data preparation for multi-relational data mining

Lahbib, Dhafer 06 December 2012 (has links)
In multi-relational data mining, data are represented in relational form, where the individuals of the target table are potentially related to several records in secondary tables through one-to-many relationships. To take the secondary variables (those belonging to a non-target table) into account, most existing approaches operate by propositionalization ("flattening"), thereby losing the naturally compact initial representation and possibly introducing statistical bias. In this thesis, our purpose is to assess the relevance of secondary variables directly with respect to the target variable, in the context of supervised classification. We propose a family of non-parametric models to estimate the conditional probability density of secondary variables. This estimation extends the Naive Bayes classifier to take such variables into account. The approach relies on a supervised pre-processing of the secondary variables, through discretization in the numerical case and value grouping in the categorical one. This pre-processing is achieved in two ways. In the first approach, the partitioning is univariate, i.e., it considers a single secondary variable at a time. In the second, we propose an itemset-based multivariate partitioning of secondary variables, in order to account for correlations that may exist between them. Data grid models are used to define Bayesian criteria evaluating the considered pre-processing, and combinatorial algorithms are proposed to optimize these criteria efficiently and find good models. We evaluated our approach on synthetic and real-world multi-relational databases. Experiments show that the evaluation criteria and the optimization algorithms are able to discover relevant secondary variables, and that the Naive Bayes classifier exploiting the proposed pre-processing achieves high prediction rates.
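The one-to-many setting can be pictured concretely: each target individual carries a *bag* of secondary values rather than a single attribute value. A toy sketch of a discretize-then-Naive-Bayes pipeline over such bags (purely illustrative, assuming a single fixed cut point instead of the data-grid criteria the thesis actually optimizes; all names and data are hypothetical):

```python
import math
from collections import Counter, defaultdict

# Each target individual: a class label and a bag of secondary
# numeric values drawn from a one-to-many secondary table.
data = [
    ("spam", [0.9, 0.8, 0.95]),
    ("spam", [0.7, 0.85]),
    ("ham",  [0.1, 0.2, 0.05]),
    ("ham",  [0.3, 0.15, 0.25, 0.1]),
]

def binize(v, cut=0.5):
    # Stand-in for supervised discretization: one fixed cut point.
    return "hi" if v >= cut else "lo"

counts = defaultdict(Counter)   # per-class multinomial over bins
priors = Counter()
for label, bag in data:
    priors[label] += 1
    counts[label].update(binize(v) for v in bag)

def predict(bag):
    def score(label):
        total = sum(counts[label].values())
        s = math.log(priors[label])
        for v in bag:
            # Laplace smoothing over the two bins.
            s += math.log((counts[label][binize(v)] + 1) / (total + 2))
        return s
    return max(priors, key=score)
```

The point of the sketch is only the data shape: the classifier consumes the bag directly, with no flattening of the relational representation.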
217

A estatística nas séries iniciais: uma experiência de formação com um grupo colaborativo com professores polivalentes / Statistics in the early grades: a training experience with a collaborative group of generalist teachers

Veras, Claudio Monteiro 01 June 2010 (has links)
The objective of this research is to investigate how a group of generalist teachers who teach mathematics in the early grades, working within a collaborative group, understand Statistics activities, considering the praxeological organization and, specifically, their understanding of constructing and reading graphs and tables according to the levels proposed by Curcio and Wainer. To answer the research question "What contributions does training within a collaborative group bring to the formation of a group of generalist teachers?", we developed a study in which 16 primary school teachers took part in a collaborative group. Data were collected over five meetings: in four of them, problem situations involving reading and constructing graphs and tables were worked on, and in the last a final questionnaire was applied to assess the training carried out in the collaborative group. The analysis showed significant performance by these teachers in reading and constructing graphs and tables, according to the levels proposed by Curcio and Wainer; we thus found that the collaborative group was effective for the formation of this group of teachers.
218

Supervision des réseaux et services pair à pair / Monitoring peer-to-peer networks and services

Doyen, Guillaume 12 December 2005 (has links) (PDF)
The peer-to-peer (P2P) model is now used in constrained environments. Nevertheless, to guarantee a level of service, it requires the integration of a suitable monitoring infrastructure; this is the scope of our work. Concerning the modeling of management information, we designed an extension of CIM for the P2P model and, to validate it, implemented it on Jxta. We then specialized our information model for distributed hash tables (DHTs): we abstracted the operation of DHTs, proposed a set of metrics that characterize their performance, and derived an information model that integrates them. Finally, concerning the organization of the management plane, we proposed a hierarchical model that allows peers to organize themselves into a tree of managers and agents. This proposal was implemented on top of a Pastry implementation.
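One performance metric any DHT information model must capture is routing cost. A deliberately simplified sketch of that metric (fully populated Chord-style ring, so the greedy hop count is just the popcount of the clockwise distance; real DHTs such as Pastry, which the thesis targets, behave only asymptotically like this):

```python
def lookup_hops(n_bits, src, key):
    """Greedy finger-table routing on a fully populated Chord-style
    ring: each hop covers the largest power-of-two distance that does
    not overshoot the key, so the hop count equals the number of
    1-bits in the clockwise distance from src to key."""
    space = 2 ** n_bits
    d = (key - src) % space
    hops = 0
    while d:
        hops += 1
        d &= d - 1          # one hop accounts for one set bit
    return hops

# Average lookup cost over all keys is n_bits / 2, i.e. O(log N).
avg = sum(lookup_hops(8, 0, k) for k in range(256)) / 256
```

This kind of closed-form baseline is what measured hop-count metrics on a real overlay would be compared against.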
219

Optimisation d'un schéma de codage d'image à base d'une TCD. Application à un codeur JPEG pour l'enregistrement numérique à bas débit / Optimization of a DCT-based image coding scheme, with application to a JPEG coder for low-bit-rate digital recording

AMMAR, Moussa 14 January 2002 (has links) (PDF)
This thesis addresses the optimization of a JPEG coding/decoding scheme and the post-processing reduction of blocking artifacts in JPEG-coded images. We first propose Wiener filtering as an optimization of the synthesis filter bank for minimal distortion, and then search for an optimized quantization. The iterative algorithm A1 performs a joint optimization of the quantizers and the synthesis filter bank. Experimental results on several images show that the total gain in PSNR can reach 1.36 dB, and the visual improvements confirm these results. Finally, we propose a new blocking-artifact reduction technique based on minimizing the high-frequency energy of the quantization noise. The evaluation of algorithm B1 shows a reduction of blocking artifacts, and numerous illustrations allow the performance of this method to be assessed visually.
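The blocking artifacts at issue come from quantizing each 8x8 DCT block independently. A minimal roundtrip showing how the quantization step controls the reconstruction error (uniform quantizer for simplicity, rather than the JPEG quantization tables; scipy's DCT, not the thesis's coder):

```python
import numpy as np
from scipy.fft import dctn, idctn

def roundtrip(block, q):
    """Quantize the 8x8 DCT of a block with uniform step q and
    reconstruct; a coarser q means stronger blocking artifacts."""
    coeffs = dctn(block, norm="ortho")
    return idctn(np.round(coeffs / q) * q, norm="ortho")

rng = np.random.default_rng(0)
block = rng.uniform(0, 255, size=(8, 8))

mse_fine = float(np.mean((block - roundtrip(block, q=1.0)) ** 2))
mse_coarse = float(np.mean((block - roundtrip(block, q=50.0)) ** 2))
```

Because the orthonormal DCT preserves energy, the pixel-domain MSE equals the coefficient-domain quantization MSE, roughly q squared over 12 per coefficient; low-bit-rate coding forces a large q, hence the post-processing stage the thesis proposes.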
220

IMPACT DES INSTALLATIONS OSTREICOLES SUR L'HYDRODYNAMIQUE ET LA DYNAMIQUE SEDIMENTAIRE / Impact of oyster-farming structures on hydrodynamics and sediment dynamics

Kervella, Youen 19 March 2010 (has links) (PDF)
Oyster-farming structures are artificial obstacles that can disturb tidal currents and wave propagation, and thereby modify sediment transport. The result is local silting, sometimes very pronounced and a threat to the oyster-farming activity itself. The impact of the structures on the hydrodynamic forcings, waves and currents, is therefore evaluated at different spatial scales through in-situ measurements, physical modeling, and numerical modeling. These complementary approaches show that in the near field, currents are modified in intensity and direction, whereas waves undergo no significant modification. At the scale of an oyster farm, however, a significant slowing of the currents and a strong attenuation of wave heights are measured. From a sedimentary point of view, we observed higher turbidity under an oyster table, together with a less disturbed evolution of the sediment level than in its immediate vicinity. The reduction of bottom shear stress, due to wave attenuation by the farm, lowers the erosive potential of the waves, which translates into a different grain-size distribution and a tendency toward silting in the part of the farm downstream with respect to wave propagation. A numerical model of the Bay of Mont Saint-Michel was implemented and calibrated with our in-situ measurements; it showed that the impact of the structures on long-term sedimentation is essentially local and therefore negligible at the scale of the bay.
