61

99課綱中「信賴區間」單元之教材設計與學生學習成效評估探討 / On Study Material Design and Students’ Learning Assessment for Confidence Interval Based on the 99 Curriculum

黃聖峯 Unknown Date (has links)
Based on the 99 Curriculum Guidelines for Senior High School Mathematics, a set of project-based study materials for the "Confidence Interval" unit was composed. Eleventh- and twelfth-grade students from a girls' senior high school were recruited voluntarily and lectured, and their learning performance was evaluated before and after the lecture. The primary findings are as follows: 1. Although the twelfth-grade students had already been taught confidence intervals and the prerequisite material, their pre-test scores were poor. 2. Both eleventh- and twelfth-grade students scored markedly higher on the post-test than on the pre-test, with no significant difference between the two grades. 3. Among eleventh graders, no significant difference in learning outcomes was observed between the natural science and social science groups, although the social science students generally studied more diligently and scored slightly higher on the post-test. 4. Among twelfth graders, the natural science group significantly outperformed the social science group, in both post-test scores and score gains. 5. When students were divided into high-, middle-, and low-achievement groups by their mathematics performance, the three groups differed significantly on both the pre-test and the post-test, but not in their score gains. A questionnaire on related issues of interest was also administered and analysed, with the following results: 1. Students generally accepted and understood the study materials, recognized the everyday usefulness of confidence intervals, and could interpret such information; in practice, however, they were indifferent about studying the topic. 2. Acceptance of the non-traditional teaching approach differed between groups, with natural science students more receptive to it. 3. Students generally found the non-statistics topics in senior high school mathematics more interesting and the statistics topics comparatively difficult to learn; among the statistics topics, however, the confidence interval was the one they found most interesting. Overall, the confidence-interval instruction conducted in this study was effective.
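For reference, the interval at the heart of this curriculum unit is the standard large-sample confidence interval for a population proportion; a minimal Python illustration with made-up poll numbers (our example, not taken from the thesis):

```python
import math

# 95% confidence interval for a population proportion (poll-style data;
# the counts below are invented for illustration).
successes, n = 540, 1000              # e.g., 540 of 1,000 respondents agree
p_hat = successes / n
margin = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n)
print(f"95% CI for p: [{p_hat - margin:.3f}, {p_hat + margin:.3f}]")
```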
62

Testing Benford’s Law with the first two significant digits

Wong, Stanley Chun Yu 07 September 2010 (has links)
Benford’s Law states that the first significant digit of most data is not uniformly distributed; instead, it follows the distribution P(D₁ = d₁) = log₁₀(1 + 1/d₁) for d₁ ∈ {1, 2, …, 9}. In 2006, my supervisor, Dr. Mary Lesperance, et al. tested the goodness-of-fit of data to Benford’s Law using the first significant digit. Here we extend the research to the first two significant digits by performing several statistical tests (LR-multinomial, LR-decreasing, LR-generalized Benford, LR-Rodriguez, the Cramér-von Mises statistics Wd², Ud², and Ad², and Pearson’s χ²) and six simultaneous confidence intervals (Quesenberry, Goodman, Bailey Angular, Bailey Square, Fitzpatrick, and Sison). When testing compliance with Benford’s Law, we found that the LR-generalized Benford, Wd², and Ad² statistics performed well for the Generalized Benford, Uniform/Benford mixture, and Hill/Benford mixture distributions, while Pearson’s χ² and the LR-multinomial statistic are more appropriate for the contaminated additive/multiplicative distributions. With respect to simultaneous confidence intervals, we recommend Goodman and Sison for detecting deviations from Benford’s Law.
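As a rough illustration of the two-digit extension tested above, the following Python sketch (ours, not the thesis code) computes the Benford probabilities for the first two significant digits, P(D₁D₂ = d) = log₁₀(1 + 1/d) for d ∈ {10, …, 99}, and applies Pearson's χ² test to simulated data:

```python
import numpy as np
from scipy import stats

# Benford probabilities for the first two significant digits: 10, 11, ..., 99.
digits = np.arange(10, 100)
benford_2d = np.log10(1.0 + 1.0 / digits)

def first_two_digits(x):
    """First two significant digits of each positive number in x."""
    x = np.abs(np.asarray(x, dtype=float))
    exponent = np.floor(np.log10(x))                 # order of magnitude
    return (x / 10.0 ** (exponent - 1)).astype(int)  # values in 10..99

# Illustrative data: log-normal samples are known to be close to Benford.
rng = np.random.default_rng(0)
sample = rng.lognormal(mean=0.0, sigma=2.0, size=5000)
observed = np.bincount(first_two_digits(sample), minlength=100)[10:100]

# Pearson chi-squared goodness-of-fit test against the Benford distribution.
chi2, pvalue = stats.chisquare(observed, f_exp=benford_2d * observed.sum())
print(f"chi2 = {chi2:.1f}, p = {pvalue:.3f}")
```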
63

Comparação de métodos de construção de haplótipos em estudo de associação genômica ampla com dados simulados / Comparision of haplotypes construction methods in genomic association studies with simulated data

Arce, Cherlynn Daniela da Silva 27 February 2018 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) / With the advancement of genetic studies and of the technology applied to the genotyping of molecular markers, identifying polymorphisms associated with traits of economic interest has become more accessible, making it possible to use them to increase the accuracy of models that predict the genetic merit of animals. This advance has also increased the accuracy of studies aimed at identifying QTLs for traits of economic interest. However, the markers commonly used for this purpose are SNPs, which, being bi-allelic, may not be very efficient at identifying QTLs. Haplotypes, being multi-allelic, are more likely to be in linkage disequilibrium (LD) with QTLs. The objective of this work was therefore to identify the best haplotype construction method for QTL detection studies by comparing the three methods most commonly used for this purpose. Three simulated populations were used, representing traits with three different heritability values; phenotypic, genotypic, and pedigree data were stored for the 6,000 animals of the most recent generation: Pop1 with low heritability (0.10), Pop2 with moderate heritability (0.25), and Pop3 with high heritability (0.35). The simulated genomes consisted of 750,000 SNP markers and 750 QTLs with two to four alleles, placed randomly on 29 chromosomes with a total length of 2,333 centimorgans (cM). SNPs with minor allele frequency below 0.1 were removed, leaving 576,027, 577,189, and 576,675 markers for the Pop1, Pop2, and Pop3 populations, respectively. The phenotypic variance was 1.0, and the QTLs accounted for 50% of the heritability in each population. The mean LD per chromosome, measured by the D' statistic, ranged from 0.20 to 0.30 for all populations in the last generation. Haplotypes were constructed using three methods: Confidence Interval (CI), Four Gametes Rule (FGR), and Sliding Window (SW). For Pop1, on chromosome 15, the CI, FGR, and SW methods identified five, eight, and seven QTLs, respectively; only one QTL was identified on chromosomes 19 and 29. For the high-heritability trait, one QTL was identified on chromosome 11. Association analyses using individual SNPs identified four QTLs on chromosome 15. For the moderate-heritability trait, no significant haplotypes or individual SNPs were found. Haplotype construction based on the FGR was considered the most efficient for QTL detection, compared with the CI and SW methods and with the use of individual SNPs.
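The D' statistic used above to summarize LD can be computed from phased haplotypes; here is a minimal Python sketch (our own illustration, with simulated alleles rather than the study's data):

```python
import numpy as np

def d_prime(haplotypes):
    """Lewontin's D' between two bi-allelic loci.

    `haplotypes` is an (n, 2) array of 0/1 alleles, one row per gamete.
    """
    h = np.asarray(haplotypes)
    pA = h[:, 0].mean()          # frequency of allele 1 at locus A
    pB = h[:, 1].mean()          # frequency of allele 1 at locus B
    pAB = np.mean((h[:, 0] == 1) & (h[:, 1] == 1))  # joint frequency
    D = pAB - pA * pB
    if D >= 0:
        Dmax = min(pA * (1 - pB), (1 - pA) * pB)
    else:
        Dmax = min(pA * pB, (1 - pA) * (1 - pB))
    return abs(D) / Dmax if Dmax > 0 else 0.0

# Example: two loci in partial LD (locus B copies locus A 70% of the time).
rng = np.random.default_rng(1)
a = rng.integers(0, 2, size=1000)
b = np.where(rng.random(1000) < 0.7, a, rng.integers(0, 2, size=1000))
print(f"D' = {d_prime(np.column_stack([a, b])):.2f}")
```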
64

[pt] PREVISÃO DE CARGA DE CURTÍSSIMO PRAZO NO NOVO CENÁRIO ELÉTRICO BRASILEIRO / [en] VERY SHORT TERM LOAD FORECASTING IN THE NEW BRAZILIAN ELECTRICAL SCENARIO

GUILHERME MARTINS RIZZO 19 July 2001 (has links)
[en] This thesis presents a hybrid very-short-term load forecasting model that combines simple exponential smoothing with feed-forward artificial neural networks. The model produces point forecasts together with upper and lower limits over a fifteen-day horizon. These limits form an interval with an associated empirical confidence level, estimated through an out-of-sample test. The model's performance is evaluated in a simulation using real data from two Brazilian electric utilities.
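The exponential-smoothing half of such a hybrid, with empirical interval limits estimated out of sample, can be sketched as follows (a simplified illustration; the series, smoothing constant, and quantile levels are our assumptions, and the neural-network component is omitted):

```python
import numpy as np

def ses(y, alpha):
    """Simple exponential smoothing; returns the one-step-ahead forecasts."""
    f = np.empty(len(y))
    f[0] = y[0]
    for t in range(1, len(y)):
        f[t] = alpha * y[t - 1] + (1 - alpha) * f[t - 1]
    return f

# Hypothetical load series (the thesis uses real utility data).
rng = np.random.default_rng(2)
load = 100 + np.cumsum(rng.normal(0, 1, 500))

train_size, alpha = 400, 0.3
forecasts = ses(load, alpha)

# Empirical limits: quantiles of held-out one-step-ahead forecast errors.
errors = load[train_size:] - forecasts[train_size:]
lo, hi = np.quantile(errors, [0.05, 0.95])   # ~90% empirical coverage
next_point = alpha * load[-1] + (1 - alpha) * forecasts[-1]
print(f"forecast = {next_point:.1f}, "
      f"interval = [{next_point + lo:.1f}, {next_point + hi:.1f}]")
```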
65

Towards a flexible statistical modelling by latent factors for evaluation of simulated responses to climate forcings

Fetisova, Ekaterina January 2017 (has links)
In this thesis, using the principles of confirmatory factor analysis (CFA) and the cause-effect concept associated with structural equation modelling (SEM), a new flexible statistical framework for evaluating climate model simulations against observational data is suggested. The design of the framework also makes it possible to investigate the magnitude of the influence of different forcings on temperature, as well as to investigate a general causal latent structure of temperature data. In terms of the questions of interest, the framework suggested here can be viewed as a natural extension of the statistical approach of 'optimal fingerprinting' employed in many Detection and Attribution (D&A) studies. Its flexibility means that it can be applied under different circumstances concerning the availability of simulated data, the number of forcings in question, the climate-relevant properties of these forcings, and the properties of the climate model under study, in particular those concerning the reconstructions of forcings and their implementation. Although the framework takes near-surface temperature as the climate variable of interest and focuses on roughly the last millennium prior to industrialisation, the statistical models included in the framework can in principle be generalised to any period in the geological past, provided that simulations and proxy data on a continuous climate variable are available. Within the confines of this thesis, the performance of some CFA and SEM models is evaluated in pseudo-proxy experiments, in which the true unobservable temperature series is replaced by temperature data from a selected climate model simulation. The results indicate that, depending on the climate model and the region under consideration, the underlying latent structure of temperature data can be of varying complexity, making our statistical framework, which serves as a basis for a wide range of CFA and SEM models, a powerful and flexible tool. Thanks to these properties, its application may ultimately contribute to increased confidence in conclusions about the ability of the climate model in question to simulate observed climate changes. / At the time of the doctoral defense, Papers 2 and 3 were unpublished manuscripts.
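At its core, the 'optimal fingerprinting' approach that this framework extends regresses observed temperature on simulated responses to individual forcings; a stripped-down, non-optimal (ordinary least squares) Python sketch with synthetic stand-in data:

```python
import numpy as np

# Hypothetical setup: y is an observed (proxy) temperature series and the
# columns of X are simulated responses to individual forcings
# (e.g., solar, volcanic, greenhouse gases).
rng = np.random.default_rng(3)
n = 1000                                   # annual values, ~last millennium
X = rng.normal(size=(n, 3))                # stand-ins for simulated responses
beta_true = np.array([0.5, 1.0, 0.8])
y = X @ beta_true + rng.normal(scale=0.5, size=n)

# OLS estimate of the scaling factors; a factor near 1 means the simulated
# response has roughly the right amplitude ("detection and attribution").
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print("estimated scaling factors:", beta_hat.round(2))
```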
66

Technologie elektrojiskrového drátového řezání / Technology of wire electrical discharge machining

Brázda, Radim January 2013 (has links)
This master's thesis deals with the unconventional technology of wire electrical discharge machining. It describes the principles and essence of electrical discharge machining in general and of wire electrical discharge machining in particular, with emphasis on applying this technology in a medium-sized engineering company. It also describes the complete wire-cutting process and machining on the Excetek V 650 wire cutter. The thesis then statistically evaluates the precision parameters of the machined surfaces, specifically for the 116-8M-130 belt pulley. It closes with a technical-economic evaluation addressing the hourly cost of machining on the Excetek V 650.
67

Utilisation de l’estimateur d’Agresti-Coull dans la construction d’intervalles de confiance bootstrap pour une proportion / Use of the Agresti-Coull estimator in constructing bootstrap confidence intervals for a proportion

Pilotte, Mikaël 10 1900 (has links)
Several bootstrap approaches exist for constructing confidence intervals. A difficulty arises in the specific case of a proportion when the usual estimator, the proportion of successes in the sample p̂, equals 0. In the classical setting of independent and identically distributed (i.i.d.) Bernoulli observations, the generated bootstrap samples then contain only failures with probability 1, and the bootstrap confidence intervals degenerate to the single point 0. In the finite-population setting, the same problem occurs when a bootstrap method is applied to a sample containing only failures. A possible solution is inspired by the estimator used in the methods of [Wilson, 1927] and [Agresti and Coull, 1998], namely p̃, the proportion of successes in an augmented sample formed by adding two successes and two failures to the original sample. The solution introduced here is to bootstrap the distribution of p̂ while applying the bootstrap methods to the sample augmented with two successes and two failures, both in the classical setting and for a finite population. The results show that a version of the percentile method is the most efficient bootstrap method for interval estimation of a proportion, both in the i.i.d. setting and under simple random sampling without replacement. Our simulations also show that this percentile method competes favourably with the best traditional methods.
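A minimal Python sketch of this augmented-sample percentile bootstrap (our own illustration; the function name and constants are assumptions, not the thesis's code):

```python
import numpy as np

def percentile_ci_augmented(sample, n_boot=5000, level=0.95, seed=0):
    """Bootstrap percentile CI for a proportion, resampling from the
    Agresti-Coull-style augmented sample (two successes and two failures
    added), so the interval is not degenerate when the sample contains
    no successes."""
    rng = np.random.default_rng(seed)
    augmented = np.concatenate([sample, [1, 1, 0, 0]])
    n = len(sample)
    # Resample at the original size n and compute the proportion each time.
    boot = rng.choice(augmented, size=(n_boot, n), replace=True).mean(axis=1)
    alpha = 1 - level
    return np.quantile(boot, [alpha / 2, 1 - alpha / 2])

# All-failure sample: the naive bootstrap would return the degenerate [0, 0].
zeros = np.zeros(30, dtype=int)
print(percentile_ci_augmented(zeros))   # non-degenerate interval
```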
68

A Monte Carlo Study of Missing Data Treatments for an Incomplete Level-2 Variable in Hierarchical Linear Models

Kwon, Hyukje 20 July 2011 (has links)
No description available.
69

A comparative study of permutation procedures

Van Heerden, Liske 30 November 1994 (has links)
The unique problems encountered when analyzing weather data sets, that is, measurements taken while conducting a meteorological experiment, have forced statisticians to reconsider conventional analysis methods and investigate permutation test procedures. The problems encountered when analyzing weather data sets are simulated in a Monte Carlo study, and the results of the parametric and permutation t-tests are compared with regard to significance level, power, and average confidence interval length. Seven population distributions are considered: three are variations of the normal distribution, and the others are the gamma, lognormal, rectangular, and empirical distributions. The normal distribution contaminated with zero measurements is also simulated. In the simulated situations in which the variances are unequal, the permutation test procedure was performed using other test statistics, namely the Scheffé, Welch, and Behrens-Fisher statistics. / Mathematical Sciences / M. Sc. (Statistics)
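The basic two-sample permutation test compared above can be sketched in a few lines of Python (our illustration, using gamma data like one of the seven distributions studied; all constants are assumptions):

```python
import numpy as np

def permutation_t_test(x, y, n_perm=10000, seed=0):
    """Two-sample permutation test on the difference in means;
    returns a two-sided p-value."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([x, y])
    observed = x.mean() - y.mean()
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        diff = perm[:len(x)].mean() - perm[len(x):].mean()
        if abs(diff) >= abs(observed):
            count += 1
    return (count + 1) / (n_perm + 1)   # add-one correction

# Example with skewed (gamma) data.
rng = np.random.default_rng(4)
x = rng.gamma(shape=2.0, scale=1.0, size=25)
y = rng.gamma(shape=2.0, scale=1.3, size=25)
print(f"p = {permutation_t_test(x, y):.4f}")
```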
70

Approche bayésienne de la construction d'intervalles de crédibilité simultanés à partir de courbes simulées / A Bayesian approach to constructing simultaneous credible intervals from simulated curves

Lapointe, Marc-Élie 07 1900 (has links)
This master's thesis addresses the simulation of simultaneous credible intervals in a Bayesian context. First, we study precipitation data and two functions based on these data: the empirical distribution function and the return period, a non-linear function of the empirical distribution. We review known methods for obtaining simultaneous confidence intervals for these functions using a polynomial basis, and we present a method for simulating simultaneous credible intervals. We then move to a Bayesian setting and explore several models of prior distributions; for the most complex of these, Monte Carlo simulation is needed to obtain simultaneous posterior credible intervals. Finally, we use a non-linear basis, built on the angular transformation and on monotone splines, to obtain a valid simultaneous credible interval for the return period.
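One standard way to turn a set of posterior-simulated curves into a simultaneous band is the max-standardized-deviation construction; a short Python sketch (a common construction, not necessarily the one developed in this thesis):

```python
import numpy as np

def simultaneous_band(curves, level=0.95):
    """Simultaneous credible band from posterior-simulated curves.

    `curves` is (n_sim, n_points). The band contains the whole curve
    for ~`level` of the simulations (max-standardized-deviation method)."""
    center = curves.mean(axis=0)
    scale = curves.std(axis=0, ddof=1)
    # Worst standardized deviation of each simulated curve from the center.
    max_dev = np.max(np.abs(curves - center) / scale, axis=1)
    c = np.quantile(max_dev, level)
    return center - c * scale, center + c * scale

# Example: posterior draws of a smooth curve on a grid.
rng = np.random.default_rng(5)
grid = np.linspace(0, 1, 50)
draws = np.sin(2 * np.pi * grid) + rng.normal(0, 0.2, size=(2000, 50))
lower, upper = simultaneous_band(draws)
inside = np.mean(np.all((draws >= lower) & (draws <= upper), axis=1))
print(f"fraction of curves fully inside the band: {inside:.3f}")
```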
