1

Consecutive patterns and statistics on restricted permutations

Elizalde Torrent, Sergi 16 July 2004 (has links)
The subject of this thesis is the enumeration of permutations with forbidden subsequences with respect to certain statistics, and the enumeration of permutations avoiding generalized subsequences. After introducing some definitions concerning subsequences and statistics on permutations and Dyck paths, we begin by studying the distribution of the statistics 'number of fixed points' and 'number of excedances' on permutations avoiding a subsequence of length 3. One of the main results is that the joint distribution of this pair of parameters is the same on 321-avoiding permutations as on 132-avoiding permutations. This generalizes a recent theorem of Robertson, Saracino and Zeilberger. We prove this result by giving a bijection that preserves the two statistics in question together with one further parameter. The key idea is to introduce a new class of statistics on Dyck paths, based on what we call a tunnel. Next we consider the same pair of statistics on permutations simultaneously avoiding two or more subsequences of length 3. We solve all the cases, giving the corresponding generating functions. Some cases are generalized to subsequences of arbitrary length. We also describe the distribution of these parameters on involutions avoiding any subset of subsequences of length 3. The main technique consists of using bijections between permutations with forbidden subsequences and certain kinds of Dyck paths, so that the permutation statistics we consider correspond to Dyck-path statistics that are easier to enumerate. We then present a new family of bijections from the set of Dyck paths to itself, which send statistics arising in the study of permutations with forbidden subsequences to classical Dyck-path statistics whose distribution is easily obtained.
In particular, this gives a simple bijective proof of the equidistribution of fixed points on 321-avoiding and on 132-avoiding permutations. We then give new interpretations of the Catalan numbers and the Fine numbers. We consider a class of permutations defined in terms of non-crossing matchings of 2n points on a circle. We study their structure and some of their properties, and give the distribution of several statistics on these permutations. In the next part of the thesis we introduce a different notion of forbidden subsequences, with the requirement that the elements forming the subsequence must occur in consecutive positions in the permutation. More generally, we study the distribution of the number of occurrences of subwords (consecutive subsequences) in permutations. We solve the problem in several cases, depending on the shape of the subword, obtaining the corresponding bivariate exponential generating functions as solutions of certain linear differential equations. The method is based on the representation of permutations as increasing binary trees and on symbolic methods. The final part deals with generalized subsequences, which extend both the notion of classical subsequences and that of subwords. For some subsequences we obtain new enumerative results. Finally, we study the asymptotic behavior of the number of permutations of size n avoiding a fixed generalized subsequence as n tends to infinity. We also give lower and upper bounds on the number of permutations avoiding certain subsequences.
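The equidistribution of fixed points between 321-avoiding and 132-avoiding permutations stated in this abstract can be checked by brute force for small n. The sketch below is an illustration of the statement, not the thesis's bijective proof:

```python
from itertools import permutations, combinations
from collections import Counter

def avoids(perm, pattern):
    """True if perm contains no subsequence order-isomorphic to pattern."""
    k = len(pattern)
    for idxs in combinations(range(len(perm)), k):
        vals = [perm[i] for i in idxs]
        if all((vals[a] < vals[b]) == (pattern[a] < pattern[b])
               for a in range(k) for b in range(a + 1, k)):
            return False
    return True

def fixed_point_dist(n, pattern):
    """Distribution of fixed-point counts over pattern-avoiding permutations of {1..n}."""
    dist = Counter()
    for p in permutations(range(1, n + 1)):
        if avoids(p, pattern):
            fp = sum(1 for pos, val in enumerate(p, start=1) if pos == val)
            dist[fp] += 1
    return dist

# Fixed points are equidistributed over 321-avoiding and 132-avoiding permutations.
for n in range(1, 7):
    assert fixed_point_dist(n, (3, 2, 1)) == fixed_point_dist(n, (1, 3, 2))
print("fixed-point distributions agree for all n up to 6")
```

For each n the two distributions also sum to the Catalan number, since both pattern classes are Catalan-counted.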
2

METACOGNITION IN LEARNING ELEMENTARY PROBABILITY AND STATISTICS

RYSZ, TERI January 2004 (has links)
No description available.
3

Quantum statistics and the magnetocaloric effect

Sandberg, Anna January 2020 (has links)
Caloric materials show promise for replacing the function of vapor-compression systems in today's cooling devices, enabling more energy-efficient cooling and eliminating the need for refrigerants that contribute to climate change. This project focused on magnetocaloric materials, which change temperature when exposed to magnetic fields. One step towards finding viable materials is developing realistic simulations. To this end, the project investigated whether the calculated magnetocaloric effect is affected by the choice of statistics. Three systems were studied using Monte Carlo simulations: bcc Fe, FeRh and Fe2P. The results showed differences in the calculated entropy change depending on the choice of statistics. Unlike the classical statistics, the quantum statistics gave ∆S = 0 below the phase transition. At the phase transitions, the quantum statistics resulted in similar or smaller values for the calculated change in entropy.
4

Technology-enhanced statistics learning experiment: a case study at upper secondary level

Oikarinen, J. (Juho) 31 October 2016 (has links)
Abstract The aim of this study was to examine and develop statistics education by implementing computer-supported collaborative learning (CSCL). The study was influenced by design-based research and focuses on describing the statistical learning of upper secondary school students (N=138) in a CSCL environment; both quantitative and qualitative methods were used. It addresses a gap in classroom research with a novel approach combining CSCL, secondary-level mathematics education and statistical literacy. First, the students' starting level in statistical literacy was assessed in a pre-test evaluating their perceptions and knowledge of statistics. The results showed a severe lack of understanding of basic statistical concepts. Second, CSCL supports students in collaborating asynchronously in small groups using technology. The results suggest that studying in a group fostered learning, and that the electronic, interactive material, designed around the principles of the cognitive theory of multimedia learning, clarified the topics studied. Third, the shift from traditional didactic instruction towards student-centred CSCL was challenging for students. According to the results, students had little prior experience of learning in CSCL environments, and the quality of their conversational acts varied considerably; learning to collaborate productively appears to require practice. The articulation and quality of mathematical discussion improved as students became better acquainted with their teammates. Students' collaboration in small groups was examined using video and content analyses; a contact summary sheet instrument facilitated observation of the extent and quality of students' intersubjective collaborative learning.
Fourth, students in the treatment group had better learning outcomes than students in the control group. The results show a statistically significant difference between the treatment and control groups only in the delayed post-test, with a medium effect size. The interactive material and CSCL appeared to foster the development of statistical literacy. Nevertheless, students were critical of studying in the CSCL environment.
5

Restrictive ranking

Norman, James Everett January 1965 (has links)
This dissertation is a study of certain aspects of restricted ranking, a method intended for use by a panel of m judges evaluating the relative merits of N subjects: candidates for scholarships, awards, etc. Each judge divides the N subjects into R classes, so that nᵢ individuals receive a grade i (i = 1, 2, …, R; Σnᵢ = N), where the R numbers nᵢ are close to N/R (nᵢ = N/R when N is divisible by R) and are preassigned and the same for all judges. When this method is used, all subjects are treated alike, the grading system is the same for all judges and the grades of each judge are given equal weight. Equally important, the meaning of a particular grade is clear to each judge and the same for each judge. Under the null hypothesis that all nR = N subjects are of equal merit, tests of significance are developed to determine whether (1) a particular individual is superior or inferior to the rest of the subjects; (2) two particular subjects are of equal merit; (3) the individuals with the highest and lowest scores are respectively superior and inferior to the rest of the subjects; and (4) the nR subjects form a homogeneous group. The critical values of the test statistics for (1), (2) and (3) are tabled for small to moderate values of m, an approximation based on the asymptotic normality of the appropriate test statistic proving suitable for large m. The test of homogeneity (4) employs a sum of squares of subjects' scores which is shown to be asymptotically distributed for m→∞ as chi-square with nR−1 degrees of freedom. For the special case of complete ranking (R = N), this statistic is identical to one proposed by Friedman (1937) for m rankings. The behavior of two of these tests is theoretically investigated for the non-null case of nR−1 subjects having equal merit and one "outlying" subject whose merit exceeds the others.
The assumption is made that each judge j assigns a grade to every subject i on the basis of a "subjective random variable" xᵢⱼ with mean equal to the "true" merit of subject i, and that the distribution of xᵢⱼ is the same for all j. The probability, P(δ), that subject #1, with true mean differing from the others by an amount δ, would receive a significantly high score according to the test for outliers is obtained and presented graphically as a function of δ, for xᵢⱼ distributed as (1/2)sech²(x−δ) and also as N(δ, 1). Using a result due to Hannan (1956), an expression for the asymptotic relative efficiency of the chi-squared homogeneity test for restricted vs. complete ranking in the aforementioned non-null case is obtained, and values of this A.R.E. for 2 ≤ n ≤ 10 and 2 ≤ R ≤ 8 are tabled. This A.R.E. is found to be at least 0.9 for all cases where n ≤ 10 and R ≥ 4. A further comparison of the performances of restricted (R) and complete (C) ranking is made by way of some simulation studies performed on a high-speed digital computer for the non-null case where xᵢⱼ is normally distributed with unit variance and a mean δᵢ having as many as three different possible values. The complete and restricted ranks assigned by the jth judge to the ith subject are assigned on the basis of the value of xᵢⱼ obtained by experimental sampling using a random normal number generator in the computer program. A group of Nₛ subjects with the highest rank sums for (R) and for (C) is then selected in each study. The observed difference in true means between the selected and remaining groups is then used as a measure of goodness of the two selection procedures. The results of these studies are presented graphically, displaying very close agreement between (R) and (C) in all instances. / Ph. D.
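For the complete-ranking special case (R = N) mentioned above, the homogeneity statistic coincides with Friedman's. A minimal simulation sketch, assuming uniformly random rankings under the null and Friedman's usual normalization (not necessarily the thesis's), checks that the statistic's null mean is close to the chi-square degrees of freedom N − 1:

```python
import numpy as np

rng = np.random.default_rng(0)

def friedman_stat(ranks):
    """Friedman's chi-square statistic for an (m judges x N subjects) rank matrix.
    Corresponds to the R = N special case of the restricted-ranking homogeneity test."""
    m, N = ranks.shape
    col_sums = ranks.sum(axis=0)
    return 12.0 / (m * N * (N + 1)) * np.sum((col_sums - m * (N + 1) / 2) ** 2)

# Under the null (all subjects of equal merit) each judge's ranking is a
# uniformly random permutation; the statistic is asymptotically chi-square
# with N - 1 degrees of freedom, so its average should be close to N - 1.
m, N, reps = 20, 5, 2000
vals = [friedman_stat(np.array([rng.permutation(N) + 1 for _ in range(m)]))
        for _ in range(reps)]
print(f"mean statistic over {reps} simulations: {np.mean(vals):.2f} (df = {N - 1})")
```

The exact null mean of this statistic is N − 1, so the simulated average settles near 4 here.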
6

A study of statistical and deterministic models for the prediction of the composition of a mixture

Myers, Raymond H. 07 April 2010 (has links)
This thesis is a study of various physical and statistical models which may be useful for the prediction of the composition of a ternary liquid mixture. The particular mixture considered in this study was the solvent system consisting of nitroglycerine (NG), 2-nitrodiphenylamine (2NDPA), and triacetin (TA). Several models were investigated for their adequacy and closeness of fit. An attempt has been made to relate the actual composition to a few easily measurable quantities, namely, refractive index, density, and the separate analysis of 2NDPA. Deterministic models relating the concentration of each component in the mixture with the physical determinations mentioned above have been considered first. These models are based on the known theory of physical chemistry. The deterministic model which was chosen as "best" in terms of the smallness of error of prediction, estimates the composition from the determination of density and the spectrophotometer analysis of 2NDPA. Since the latter analysis is a quick and accurate determination of the 2NDPA content and since the content of the third component could be determined by complementing to 100 percent, the models have been formulated in terms of the concentration of only one component, namely NG. The statistical models under investigation are divided into activity models and regression models. The activity model is a combination of chemical and statistical theory while the simple regression model represents an approach that a statistician might take if he disregarded the physical or chemical theory involved. Two activity models have been discussed, the first assuming the activity of the mixture constant and the second assuming the activity of the mixture to be a weighted sum of the activities of the three components. Tests of hypotheses are made to determine whether the activity models result in a significant reduction in error over that of the "best" deterministic model. 
The investigation showed that the model formulated assuming constant activity of the mixture results in the smallest error of estimation among all models under study; it has therefore been used as a basis for the preparation of control charts. The linear regression models, constructed with various functions of density, refractive index, and spectrophotometer analysis as independent variables, produced errors of estimation above those for the deterministic model. Chapter IV presents the summary of the thesis and the translation of the findings into actual control charts: on the basis of this chapter, a technician can easily determine the estimate of the composition of the mixture and the attached 99 percent confidence bounds. The chapter contains these charts together with instructions for their use and numerical examples. / Master of Science
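The kind of calculation such control charts encode can be sketched as an ordinary least-squares fit with a confidence bound on the predicted composition. Everything below is hypothetical: the calibration data, coefficients and the single new observation are invented for illustration and are not taken from the thesis:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical calibration data: NG concentration (%) modeled from
# density and the spectrophotometric 2NDPA determination.
density = rng.uniform(1.10, 1.20, 30)
ndpa = rng.uniform(0.5, 2.5, 30)
ng = 40 + 150 * (density - 1.15) - 2.0 * ndpa + rng.normal(0, 0.3, 30)

X = np.column_stack([np.ones_like(density), density, ndpa])
beta, *_ = np.linalg.lstsq(X, ng, rcond=None)
resid = ng - X @ beta
n, p = X.shape
s2 = resid @ resid / (n - p)      # residual variance estimate

# 99% confidence bounds for the mean response at a new measurement x0.
x0 = np.array([1.0, 1.15, 1.5])   # intercept, density, 2NDPA analysis
t99 = 2.771                       # two-sided 99% t quantile, 27 df
se = np.sqrt(s2 * x0 @ np.linalg.inv(X.T @ X) @ x0)
pred = x0 @ beta
print(f"estimated NG: {pred:.2f} +/- {t99 * se:.2f} (99% bounds)")
```

A chart-based version of this would simply tabulate `pred` and the bounds over a grid of density and 2NDPA readings, which is what the thesis's Chapter IV provides for the technician.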
7

A power study of multiple range and multiple F tests

Wine, R. Lowell 12 January 2010 (has links)
In the study of multiple comparisons tests the following topics were discussed: (i) extension to the general case of certain properties and results which previously had been given for three and four means only; (ii) power vectors and average power; (iii) expressions of power for the multiple range tests and for the multiple F test involving only three means; and (iv) methods for evaluating power. These four topics are amplified in order in the four paragraphs below. A set of recursion formulas was obtained for enumerating the decision patterns for n means. xᵢ was given by an equation (not reproduced in this record) for i = 1, 2, ..., n−1. In Part IV, formulas were derived which express d_{j,i+1} as a function of the xᵢ. This made possible the writing of the bounding equations for any decision region involving n means. The regions (1,2), (1,2,3), (1,2,4), ..., (1,2,...,n) for the multiple range tests were described in the sample space of differences among n means. / Ph. D.
8

Sequential design augmentation with model misspecification

Sutherland, Sindee S. 03 October 2007 (has links)
In Response Surface Methodology (RSM) one attempts to model some variable of interest, usually as a known function of design variables. Subsequent analysis often indicates a need to move to a new region of interest, and the design is commonly augmented by adding points sequentially in this new region. Current methods of sequential design augmentation assume either a correctly specified model or misspecification that can be quantified, such as using a first-order model when a second-order model is correct. Under model misspecification, however, the sequential placement of points in the new region of interest using the usual augmentation techniques may not be optimal, especially if the misspecification is not due to polynomial terms. A new methodology, based on a modified kernel regression procedure called HATLINK, is presented that incorporates model misspecification into the sequential augmentation of points in the new region. HATLINK is a combination of parametric and nonparametric regression and is designed to perform best when the user has specified a reasonable approximate model. The parametric regression supplies a basic fit, while the nonparametric regression allows adjustments that compensate for some misspecification in the parametric model. The mixing parameter is determined adaptively through cross-validation. The augmentation is performed by a new technique called BIIV, the bias-influenced integrated prediction variance. BIIV attempts to select points that minimize the integrated prediction variance while targeting locations where the current fit is worst; it thus incorporates an estimate of the bias due to misspecification of the parametric model into the augmentation procedure. It is shown that the designs generated by sequential design augmentation using HATLINK and BIIV are superior to designs from other methods. / Ph. D.
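The core HATLINK idea, mixing a parametric fit with a kernel smoother and choosing the mixing weight by cross-validation, can be sketched in one dimension. This is a simplified illustration under assumed details (linear parametric model, Gaussian kernel, leave-one-out cross-validation over a grid), not the thesis's actual procedure:

```python
import numpy as np

rng = np.random.default_rng(1)

def kernel_smooth(x_train, y_train, x_eval, h):
    """Nadaraya-Watson kernel regression with a Gaussian kernel of bandwidth h."""
    w = np.exp(-0.5 * ((x_eval[:, None] - x_train[None, :]) / h) ** 2)
    return (w @ y_train) / w.sum(axis=1)

def mixed_fit(x_train, y_train, x_eval, lam, h):
    """Convex combination of a linear parametric fit and a kernel smoother.
    lam = 1 recovers the pure parametric fit, lam = 0 the pure smoother."""
    X = np.column_stack([np.ones_like(x_train), x_train])
    beta, *_ = np.linalg.lstsq(X, y_train, rcond=None)
    parametric = beta[0] + beta[1] * x_eval
    return lam * parametric + (1 - lam) * kernel_smooth(x_train, y_train, x_eval, h)

# Misspecified setting: a linear parametric model fitted to a curved response.
x = np.linspace(0, 1, 60)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.1, x.size)

def loo_error(lam, h=0.05):
    """Leave-one-out squared prediction error of the mixed fit."""
    errs = []
    for i in range(x.size):
        mask = np.arange(x.size) != i
        pred = mixed_fit(x[mask], y[mask], x[i:i + 1], lam, h)
        errs.append((pred[0] - y[i]) ** 2)
    return float(np.mean(errs))

lams = np.linspace(0, 1, 11)
best = lams[np.argmin([loo_error(l) for l in lams])]
print(f"cross-validated mixing parameter: {best:.1f}")
```

With this strongly misspecified parametric model, cross-validation pushes the weight towards the nonparametric component; with a well-specified model it would do the opposite, which is the adaptivity the abstract describes.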
9

The relationships between discrete and continuous probability distributions

Patel, Jagdishbhai Nagjibhai January 1962 (has links)
Though some of the discrete distributions, for example the binomial, hypergeometric, Poisson, are well tabulated, often statisticians use the percentage points of approximating continuous distributions when analysing discrete data. In this thesis, the exact relationships between certain discrete and continuous distributions are established, and these relationships are used for setting confidence limits and significance testing of hypotheses. In Chapter 1, statements of all distributions and mathematical functions used in this thesis are made, and also some approximations are mentioned without proofs. In Chapter 2, exact relationships between discrete distributions (the binomial, negative-binomial, and Poisson) and continuous distributions (the F and χ²) are proved. In Chapter 3, use is made of the approximate and exact relationships between discrete and continuous distributions, for setting confidence limits on the parameters of the discrete distributions. Chapter 4 consists of the approximate and exact significance testing of hypotheses by using the approximate and exact relationships, given in Chapter 2. In Chapter 5, two-sample, exact and approximate, significance tests of hypotheses on the Poisson distribution are performed, in the case of fixed number of events experimentation and fixed time experimentation. / M.S.
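Exact relationships of the kind proved in Chapter 2 are easy to verify numerically. The identities below (binomial CDF as a regularized incomplete beta function, its F-distribution form, and the Poisson CDF as a chi-square tail) are standard results consistent with this abstract; the specific parameter values are arbitrary:

```python
import math
from scipy import stats
from scipy.special import betainc

n, p, k = 12, 0.3, 4   # arbitrary binomial parameters
lam = 3.5              # arbitrary Poisson mean

# Binomial CDF as a regularized incomplete beta function:
# P(X <= k) = I_{1-p}(n-k, k+1)
assert math.isclose(stats.binom.cdf(k, n, p), betainc(n - k, k + 1, 1 - p),
                    rel_tol=1e-9)

# The same relationship in F-distribution form:
# P(X >= k) = P(F_{2k, 2(n-k+1)} <= (n-k+1)p / (k(1-p)))
assert math.isclose(stats.binom.sf(k - 1, n, p),
                    stats.f.cdf((n - k + 1) * p / (k * (1 - p)),
                                2 * k, 2 * (n - k + 1)),
                    rel_tol=1e-9)

# Poisson CDF as a chi-square tail probability:
# P(Y <= k) = P(chi-square with 2(k+1) df exceeds 2*lambda)
assert math.isclose(stats.poisson.cdf(k, lam), stats.chi2.sf(2 * lam, 2 * (k + 1)),
                    rel_tol=1e-9)

print("binomial-beta, binomial-F and Poisson-chi-square identities verified")
```

These exact identities are what allow confidence limits and significance tests for discrete data to be read off tables of the continuous F and chi-square distributions, as Chapters 3-5 do.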
10

Some considerations of an optimum sample size for a one-stage sampling procedure

Zakich, Daniel 16 February 2010 (has links)
The purpose of this work is to determine an optimum sample size to be used for deciding which of two methods (populations) to choose for future production. The procedure involves the formulation of a loss function expressing the expected loss due to choosing the population with the smaller mean, as a function of the difference between the population means, the amount to be produced and the cost of sampling. A minimax procedure is applied to obtain the optimum sample size. Since the function does not lend itself conveniently to mathematical treatment, special cases involving the difference between the means are considered and an optimum sample size is found for these cases. In all cases, the optimum sample size is an explicit function of the amount to be produced, the cost of sampling and the standard deviation. / Master of Science
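A loss function of the kind described can be minimaxed numerically. The sketch below assumes one specific hypothetical form, with none of the constants taken from the thesis: the wrong-choice probability comes from comparing two normal sample means, the loss is the mean difference times the amount produced, and sampling cost is linear in n:

```python
import math

def expected_loss(n, delta, amount=1000.0, cost=1.0, sigma=1.0):
    """Expected loss when choosing between two normal populations by comparing
    sample means of size n each: if the smaller-mean population is wrongly
    chosen, each of `amount` produced units loses `delta`."""
    # P(wrong choice) = Phi(-delta * sqrt(n/2) / sigma), via erfc
    p_wrong = 0.5 * math.erfc(delta * math.sqrt(n) / (2 * sigma))
    return amount * delta * p_wrong + cost * 2 * n

def minimax_n(n_grid, delta_grid, **kw):
    """Sample size minimizing the worst-case expected loss over the mean difference."""
    return min(n_grid, key=lambda n: max(expected_loss(n, d, **kw) for d in delta_grid))

deltas = [i / 100 for i in range(1, 201)]   # candidate mean differences
n_star = minimax_n(range(1, 201), deltas)
print("minimax sample size per population:", n_star)
```

Note the trade-off the minimax resolves: small n keeps sampling cost low but leaves a large worst-case loss from a wrong choice, while large n pays for precision the decision may not need.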
