1 |
Studium plastických vlastností formovacích směsí / Research on the plasticity of foundry sands
Macků, Martin. January 2010 (has links)
The subject of this thesis was to develop a comprehensive methodology for evaluating the plasticity of molding sand. The study focused on four types of mixtures used in the foundry industry. For the evaluation of plasticity it was important to provide an indicator of deformation ability, a calculation of deformation, and a logarithmic transformation for compression. In this work the plasticity methodology was applied only to the effect of pressure. Studying this issue can strongly influence the production of sound molds, owing to their ability to withstand stress without the mold breaking.
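The "logarithmic transformation for compression" mentioned above is, in standard materials testing, the true (logarithmic) strain ln(h0/h); the abstract does not give the thesis's exact definition, so the following Python sketch assumes the textbook form:

```python
import math

def engineering_strain(h0: float, h: float) -> float:
    """Relative height change of a specimen under compression."""
    return (h0 - h) / h0

def logarithmic_strain(h0: float, h: float) -> float:
    """True (logarithmic) compressive strain, ln(h0/h)."""
    return math.log(h0 / h)

# A 50 mm specimen compressed to 40 mm:
e_eng = engineering_strain(50.0, 40.0)  # 0.2
e_log = logarithmic_strain(50.0, 40.0)  # ln(1.25), about 0.223
```

The logarithmic form is additive over successive compression steps, which is why it is preferred for large deformations.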
|
2 |
Bayesian Models for the Analyzes of Noisy Responses From Small Areas: An Application to Poverty Estimation
Manandhar, Binod. 26 April 2017 (has links)
We implement techniques of small area estimation (SAE) to study consumption, a welfare indicator used to assess poverty, in the 2003-2004 Nepal Living Standards Survey (NLSS-II) and the 2001 census. NLSS-II has detailed information on consumption, but it can give estimates only at the stratum level or higher. The census provides `population' variables for all households but no information on consumption; the survey contains these `population' variables as well. We combine these two sets of data to provide estimates of poverty indicators (incidence, gap and severity) for small areas (wards, village development committees and districts). Consumption is the aggregate of all food and non-food items consumed. In the welfare survey, respondents are asked to recall all consumption throughout the reference year, so the data are likely to be noisy, possibly due to response or recall errors. The consumption variable is continuous and positively skewed, so a statistician might apply a logarithmic transformation, which can reduce skewness and help meet the normality assumption required for model building. However, this can be problematic: back-transformation may produce inaccurate estimates, and interpretation becomes difficult. Without using the logarithmic transformation, we develop hierarchical Bayesian models to link the survey to the census. In our models for consumption, we incorporate the `population' variables as covariates. First, we assume that consumption is noiseless and model it under three scenarios: the exponential distribution, the gamma distribution and the generalized gamma distribution. Second, we assume that consumption is noisy and fit the generalized beta distribution of the second kind (GB2) to consumption.
We consider three more scenarios of GB2: a mixture of exponential and gamma distributions, a mixture of two gamma distributions, and a mixture of two generalized gamma distributions. We note that there are difficulties in fitting the models for noisy responses because these models have non-identifiable parameters. For each scenario, after fitting two hierarchical Bayesian models (with and without area effects), we show how to select the most plausible model and we perform a Bayesian data analysis on Nepal's poverty data. We show how to predict the poverty indicators for all wards, village development committees and districts of Nepal (a big data problem) by combining the survey data with the census. This is a computationally intensive problem because Nepal has about four million households with about four thousand households in the survey and there is no record linkage between households in the survey and the census. Finally, we perform empirical studies to assess the quality of our survey-census procedure.
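The poverty indicators named above (incidence, gap and severity) are commonly computed as the Foster-Greer-Thorbecke (FGT) family with exponent alpha = 0, 1, 2; the abstract does not spell out the formula, so the following Python sketch assumes that standard definition:

```python
def fgt_index(consumption, z, alpha):
    """Foster-Greer-Thorbecke poverty index P_alpha for a small area.

    alpha = 0: incidence (headcount ratio)
    alpha = 1: poverty gap
    alpha = 2: poverty severity
    z: poverty line on the same scale as consumption.
    """
    shortfalls = ((z - y) / z for y in consumption if y < z)
    return sum(s ** alpha for s in shortfalls) / len(consumption)

# A toy ward with poverty line z = 100:
ward = [50, 80, 120, 200]
incidence = fgt_index(ward, 100, 0)  # 2 of 4 households poor -> 0.5
gap = fgt_index(ward, 100, 1)        # (0.5 + 0.2) / 4 = 0.175
```

In the survey-census setting sketched in the abstract, `consumption` for each ward would be the model's predicted consumption values rather than observed ones.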
|
3 |
Estimating the Ratio of Two Poisson Rates
Price, Robert M.; Bonett, Douglas G. 01 September 2000 (has links)
Classical and Bayesian methods for interval estimation of the ratio of two independent Poisson rates are examined and compared in terms of their exact coverage properties. Two methods to determine sampling effort requirements are derived.
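For readers unfamiliar with the setting: given counts x1 ~ Poisson(t1·λ1) and x2 ~ Poisson(t2·λ2), one classical large-sample interval for the rate ratio λ1/λ2 works on the log scale. This is an illustrative sketch only, not necessarily one of the exact methods the paper compares:

```python
import math

def poisson_rate_ratio_ci(x1: int, t1: float, x2: int, t2: float,
                          z: float = 1.96) -> tuple:
    """Approximate 95% CI for lambda1/lambda2 from two independent
    Poisson counts x1, x2 with exposures t1, t2.

    Wald interval on the log scale: log(r) +/- z * sqrt(1/x1 + 1/x2).
    Requires x1 > 0 and x2 > 0.
    """
    ratio = (x1 / t1) / (x2 / t2)
    se = math.sqrt(1.0 / x1 + 1.0 / x2)
    return ratio, ratio * math.exp(-z * se), ratio * math.exp(z * se)

# 30 events in 100 hours versus 20 events in 100 hours:
r, lo, hi = poisson_rate_ratio_ci(30, 100.0, 20, 100.0)  # r is about 1.5
```

Exact conditional methods instead treat x1 as binomial given x1 + x2, which is one route to the exact coverage comparisons the abstract mentions.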
|
4 |
Wiederholungen in Texten
Golcher, Felix. 16 December 2013 (has links)
Diese Arbeit untersucht vollständige Zeichenkettenfrequenzverteilungen natürlichsprachiger Texte auf ihren linguistischen und anwendungsbezogenen Gehalt. Im ersten Teil wird auf dieser Datengrundlage ein unüberwachtes Lernverfahren entwickelt, das Texte in Morpheme zerlegt. Die Zerlegung geht von der Satzebene aus und verwendet jegliche vorhandene Kontextinformation. Es ergibt sich ein sprachunabhängiger Algorithmus, der die gefundenen Morpheme teilweise zu Baumstrukturen zusammenordnet. Die Evaluation der Ergebnisse mit Hilfe statistischer Modelle ermöglicht die Identifizierung auch kleiner Performanzunterschiede. Diese sind einer linguistischen Interpretation zugänglich. Der zweite Teil der Arbeit besteht aus stilometrischen Untersuchungen anhand eines Textähnlichkeitsmaßes, das ebenfalls auf vollständigen Zeichenkettenfrequenzen beruht. Das Textähnlichkeitsmaß wird in verschiedenen Varianten definiert und anhand vielfältiger stilometrischer Fragestellungen und auf Grundlage unterschiedlicher Korpora ausgewertet. Dabei ist ein wiederholter Vergleich mit der Performanz bisheriger Forschungsansätze möglich. Die Performanz moderner Maschinenlernverfahren kann mit dem hier vorgestellten konzeptuell einfacheren Verfahren reproduziert werden. Während die Segmentierung in Morpheme ein lokaler Vorgang ist, besteht Stilometrie im globalen Vergleich von Texten. Daher bietet die Untersuchung dieser zwei unverbunden scheinenden Fragestellungen sich gegenseitig ergänzende Perspektiven auf die untersuchten Häufigkeitsdaten. Darüber hinaus zeigt die Diskussion der rezipierten Literatur zu beiden Themen ihre Verbindungen durch verwandte Konzepte und Denkansätze auf. Aus der Gesamtheit der empirischen Untersuchungen zu beiden Fragestellungen kann abgeleitet werden, dass den längeren und damit selteneren Zeichenketten wesentlich mehr Informationsgehalt innewohnt, als in der bisherigen Forschung gemeinhin angenommen wird.
/ This thesis investigates the linguistic and application-specific content of complete character substring frequency distributions of natural language texts. The first part develops on this basis an unsupervised learning algorithm for segmenting text into morphemes. The segmentation starts from the sentence level and uses all available context information. The result is a language-independent algorithm which arranges the found morphemes partly into tree-like structures. The evaluation of the output using advanced statistical modelling allows for identifying even very small performance differences. These are accessible to linguistic interpretation. The second part of the thesis consists of stylometric investigations by means of a text similarity measure also rooted in complete substring frequency statistics. The similarity measure is defined in different variants and evaluated for various stylometric tasks and on the basis of diverse corpora. In most of the case studies the presented method can be compared with publicly available performance figures of previous research. The high performance of modern machine learning methods is reproduced by the considerably simpler algorithm developed in this thesis. While the segmentation into morphemes is a local process, stylometry consists in the global comparison of texts. For this reason, investigating these two seemingly unconnected problems offers complementary perspectives on the explored frequency data. The discussion of the received literature on both subjects additionally shows their connection through related concepts and approaches. It can be deduced from the totality of the empirical studies on text segmentation and stylometry conducted in this thesis that the long and rare character sequences contain considerably more information than assumed in previous research.
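Both halves of the thesis rest on complete character-substring frequency distributions. As an illustration only (not the thesis's actual segmentation algorithm or similarity measure), a minimal Python sketch that counts all substrings up to a fixed length and compares two texts by their shared substring mass:

```python
from collections import Counter

def substring_counts(text: str, max_len: int = 4) -> Counter:
    """Frequencies of all character substrings of length 1..max_len."""
    counts = Counter()
    for i in range(len(text)):
        for j in range(i + 1, min(i + max_len, len(text)) + 1):
            counts[text[i:j]] += 1
    return counts

def overlap_similarity(a: str, b: str, max_len: int = 4) -> float:
    """Shared substring mass relative to the smaller text (0..1)."""
    ca, cb = substring_counts(a, max_len), substring_counts(b, max_len)
    shared = sum(min(ca[s], cb[s]) for s in ca.keys() & cb.keys())
    return shared / min(sum(ca.values()), sum(cb.values()))
```

Real implementations (and the thesis itself, which works with *complete* distributions, i.e. substrings of every length) would use suffix arrays or suffix trees rather than an explicit dictionary of all substrings.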
|
5 |
Applications of recurrence relation
Chuang, Ching-hui. 26 June 2007 (has links)
Sequences occur in many branches of applied mathematics, and recurrence relations are a powerful tool for characterizing and studying them. Some commonly used methods for solving recurrence relations are investigated. Many examples with applications in algorithms, combinatorics, algebra, analysis, probability, etc., are discussed. Finally, some well-known contest problems related to recurrence relations are addressed.
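As a small illustration of the workflow the abstract describes, take the Tower of Hanoi recurrence H(n) = 2·H(n-1) + 1 with H(1) = 1, which unrolls to the closed form 2^n - 1; a Python sketch checks the two against each other:

```python
def hanoi_recursive(n: int) -> int:
    """Minimal move count via the recurrence H(n) = 2*H(n-1) + 1."""
    if n == 1:
        return 1
    return 2 * hanoi_recursive(n - 1) + 1

def hanoi_closed_form(n: int) -> int:
    """Closed form 2**n - 1, obtained by unrolling the recurrence."""
    return 2 ** n - 1

# The two definitions agree on a range of inputs:
assert all(hanoi_recursive(n) == hanoi_closed_form(n)
           for n in range(1, 15))
```

The same pattern (guess a closed form by unrolling, then verify by induction or direct comparison) carries over to the linear recurrences typically found in contest problems.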
|
6 |
Vybrané transformace náhodných veličin užívané v klasické lineární regresi / Selected random variable transformations used in classical linear regression
Tejkal, Martin. January 2017 (has links)
Classical linear regression and the hypothesis tests derived from it rest on the assumptions that the dependent variables are normally distributed with equal variances. When the normality assumptions are violated, transformations of the dependent variables are usually applied. The first part of this thesis deals with variance-stabilizing transformations. Considerable attention is devoted to random variables with Poisson and negative binomial distributions, for which generalized variance-stabilizing transformations with additional parameters in the argument are studied, and optimal values of these parameters are determined. The aim of the second part is to compare the transformations introduced in the first part with other frequently used transformations. The comparison is carried out within the framework of analysis of variance, by testing the hypothesis of equal means of p independent random samples with the F test. This part first studies the properties of the F test under equal and unequal variances across the samples. The power functions of the F test are then compared when it is applied to p samples from the Poisson distribution transformed by the square-root, logarithmic, and Yeo-Johnson transformations, and to samples from the negative binomial distribution transformed by the inverse hyperbolic sine, logarithmic, and Yeo-Johnson transformations.
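As background for these comparisons: for Poisson data the square-root transformation approximately stabilizes the variance near the constant 1/4 regardless of the mean. A simulation sketch using only the standard library (Knuth's Poisson sampler; the thesis's generalized parametric transformations are not reproduced here):

```python
import math
import random

def poisson_sample(lam: float, rng: random.Random) -> int:
    """Draw one Poisson(lam) variate with Knuth's algorithm
    (suitable for small and moderate lam)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def variance(xs) -> float:
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

rng = random.Random(42)
for lam in (4.0, 16.0):
    raw = [poisson_sample(lam, rng) for _ in range(20000)]
    root = [math.sqrt(x) for x in raw]
    # Raw variance tracks the mean (roughly lam); on the square-root
    # scale it sits near the constant 1/4 for both values of lam.
    print(lam, round(variance(raw), 2), round(variance(root), 3))
```

This mean-variance decoupling is exactly what makes the transformed samples better suited to the equal-variance assumption of the F test discussed above.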
|