  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

Statistické vlastnosti mikrostruktury dopravního proudu / Statistical characteristics of the traffic flow microstructure

Apeltauer, Jiří Unknown Date (has links)
Current traffic flow theory assumes interactions only between neighbouring vehicles in the traffic stream. This assumption is reasonable, but it reflects the measurement capabilities available decades ago, which have since been surpassed. In general, vehicles may also interact at greater distances (or among more than two vehicles), but so far no procedure has been put forward to quantify the range of this interaction. This work introduces a method, based on mathematical statistics and precise measurement of the time headways of individual vehicles, that makes it possible to determine these interaction ranges (spanning several vehicles), and validates it for narrow ranges of traffic flow density. It was found that at high traffic flow densities there is an interaction between at least three consecutive vehicles, and between four and five vehicles at lower densities. The results could be applied in the development and verification of new traffic flow models.
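The statistical idea behind this kind of headway analysis can be sketched as follows. This is an illustrative sketch on synthetic data, not the thesis's actual procedure: if vehicle i reacts only to vehicle i-1, consecutive headways should be independent, so a two-sample Kolmogorov-Smirnov test comparing headways that follow short gaps against those that follow long gaps should find no difference.

```python
# Illustrative sketch (synthetic data): probing for multi-vehicle interaction
# in a sequence of time headways.  Under the neighbour-only assumption,
# headway h[i] is independent of the preceding gap h[i-1].
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# synthetic headways in seconds; real data would come from loop detectors
h = rng.gamma(shape=4.0, scale=0.5, size=10_000)

median = np.median(h[:-1])
after_short = h[1:][h[:-1] < median]    # headways following a short gap
after_long = h[1:][h[:-1] >= median]    # headways following a long gap

# two-sample K-S test; for independent headways it should not reject
stat, p = stats.ks_2samp(after_short, after_long)
print(f"KS statistic = {stat:.4f}, p-value = {p:.4f}")
```

On real detector data, a significant difference at high densities would be evidence of interaction reaching beyond the immediately preceding vehicle.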
42

Inference for Birnbaum-Saunders, Laplace and Some Related Distributions under Censored Data

Zhu, Xiaojun 06 May 2015 (has links)
The Birnbaum-Saunders (BS) distribution is a positively skewed distribution and a popular model for analyzing lifetime data. In this thesis, we first develop an improved method of estimation for the BS distribution and the corresponding inference. Compared to the maximum likelihood estimators (MLEs) and the modified moment estimators (MMEs), the proposed method results in estimators with smaller bias, while having the same mean squared errors (MSEs) as these two estimators. Next, the existence and uniqueness of the MLEs of the parameters of the BS distribution are discussed based on Type-I, Type-II and hybrid censored samples. In the case of the five-parameter bivariate Birnbaum-Saunders (BVBS) distribution, we use the distributional relationship between the bivariate normal and BVBS distributions to propose a simple and efficient method of estimation based on Type-II censored samples. Regression analysis is commonly used in the analysis of life-test data when covariates are involved. For this reason, we consider the regression problem based on the BS and BVBS distributions and develop the associated inferential methods. One may generalize the BS distribution by using the Laplace kernel in place of the normal kernel; the result, referred to as the Laplace BS (LBS) distribution, is one of the generalized Birnbaum-Saunders (GBS) distributions. Since the LBS distribution has a close relationship with the Laplace distribution, it is necessary to first carry out a detailed study of inference for the Laplace distribution before studying the LBS distribution. Several inferential results have been developed in the literature for the Laplace distribution based on complete samples. However, research on Type-II censored samples is somewhat scarce, and in fact there is no work on Type-I censoring. For this reason, we first derive the MLEs of the location and scale parameters of the Laplace distribution based on Type-II and Type-I censored samples.
In the case of Type-II censoring, we derive the exact joint and marginal moment generating functions (MGF) of the MLEs. Then, using these expressions, we derive the exact conditional marginal and joint density functions of the MLEs and utilize them to develop exact confidence intervals (CIs) for some life parameters of interest. In the case of Type-I censoring, we first derive explicit expressions for the MLEs of the parameters, and then derive the exact conditional joint and marginal MGFs and use them to derive the exact conditional marginal and joint density functions of the MLEs. These densities are used in turn to develop marginal and joint CIs for some quantities of interest. Finally, we consider the LBS distribution and formally show the different kinds of shapes of the probability density function (PDF) and the hazard function. We then derive the MLEs of the parameters and prove that they always exist and are unique. Next, we propose the MMEs, which can be used as initial values in the numerical computation of the MLEs. We also discuss the interval estimation of parameters. / Thesis / Doctor of Science (PhD)
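For readers wanting to experiment with the basic (uncensored) estimation problem, SciPy ships the Birnbaum-Saunders distribution under the name `fatiguelife`. The sketch below fits it by plain maximum likelihood on synthetic complete data; the censored-sample methods and bias-corrected estimators developed in the thesis are not reproduced here.

```python
# Sketch: ML fitting of the Birnbaum-Saunders model via SciPy's `fatiguelife`
# distribution (shape = alpha, scale = beta).  Complete (uncensored) synthetic
# data only; this is not the thesis's censored-data methodology.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
alpha_true, beta_true = 0.5, 2.0
data = stats.fatiguelife.rvs(alpha_true, loc=0, scale=beta_true,
                             size=2000, random_state=rng)

# fix loc=0 so only the shape (alpha) and scale (beta) are estimated
alpha_hat, loc, beta_hat = stats.fatiguelife.fit(data, floc=0)
print(f"alpha_hat = {alpha_hat:.3f}, beta_hat = {beta_hat:.3f}")
```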
43

Redução no tamanho da amostra de pesquisas de entrevistas domiciliares para planejamento de transportes: uma verificação preliminar / Reduction in sample size of household interview research for transportation planning: a preliminary check

Aguiar, Marcelo Figueiredo Massulo 11 August 2005 (has links)
O trabalho tem por principal objetivo verificar, preliminarmente, a possibilidade de reduzir a quantidade de indivíduos na amostra de Pesquisa de Entrevistas Domiciliares, sem prejudicar a qualidade e representatividade da mesma. Analisar a influência das características espaciais e de uso de solo da área urbana constitui o objetivo intermediário. Para ambos os objetivos, a principal ferramenta utilizada foi o minerador de dados denominado Árvore de Decisão e Classificação contido no software S-Plus 6.1, que encontra as relações entre as características socioeconômicas dos indivíduos, as características espaciais e de uso de solo da área urbana e os padrões de viagens encadeadas. Os padrões de viagens foram codificados em termos de sequência cronológica de: motivos, modos, durações de viagem e períodos do dia em que as viagens ocorrem. As análises foram baseadas nos dados da Pesquisa de Entrevistas Domiciliares realizada pela Agência de Cooperação Internacional do Japão e Governo do Estado do Pará em 2000 na Região Metropolitana de Belém. Para se atingir o objetivo intermediário o método consistiu em analisar, através da Árvore de Decisão e Classificação, a influência da variável categórica Macrozona, que representa as características espaciais e de uso de solo da área urbana, nos padrões de viagens encadeadas realizados pelos indivíduos. Para o objetivo principal, o método consistiu em escolher, aleatoriamente, sub-amostras contendo 25% de pessoas da amostra final e verificar, através do Processamento de Árvores de Decisão e Classificação e do teste estatístico Kolmogorov - Smirnov, se os modelos obtidos a partir das amostras reduzidas conseguem ilustrar bem a freqüência de ocorrência dos padrões de viagens das pessoas da amostra final. Concluiu-se que as características espaciais e de uso de solo influenciam os padrões de encadeamento de viagens, e portanto foram incluídas como variáveis preditoras também nos modelos obtidos a partir das sub-amostras. 
A conclusão principal foi a não rejeição da hipótese de que é possível reduzir o tamanho da amostra de pesquisas domiciliares para fins de estudo do encadeamento de viagens. Entretanto ainda são necessárias muitas outras verificações antes de aceitar esta conclusão. / The main aim of this work is to verify, in a preliminary manner, the possibility of reducing the sample size of home-interview surveys without compromising their quality and representativeness. The secondary aim is to analyze the influence of the spatial and land-use characteristics of the urban area. For both aims, the main analysis tool was the Decision and Classification Tree data miner included in the software S-Plus 6.1, which finds the relations between trip-chaining patterns, the socioeconomic characteristics of individuals, and the spatial and land-use characteristics of the urban area. The trip-chaining patterns were coded as chronological sequences of trip purpose, travel mode, travel time and the period of day in which each trip occurs. The analyses were based on the home-interview survey carried out in the Belém Metropolitan Area in 2000 by the Japan International Cooperation Agency and the Pará State Government. To achieve the secondary aim, the method consisted of analyzing, using the Decision and Classification Tree, the influence of the categorical variable "Macrozona", which represents the spatial and land-use characteristics of the urban area, on the trip-chaining patterns of the individuals. For the main aim, the method consisted of randomly choosing sub-samples containing 25% of the individuals in the final sample and verifying, using the Decision and Classification Tree and the Kolmogorov-Smirnov statistical test, whether the models obtained from the reduced samples describe well the frequency of occurrence of the trip-chaining patterns of the individuals in the final sample.
The first conclusion is that the spatial and land-use characteristics of the urban area influence the trip-chaining patterns, and they were therefore also included as independent variables in the models obtained from the sub-samples. The main conclusion was the non-rejection of the hypothesis that it is possible to reduce the sample size of home-interview surveys used for trip-chaining research. Nevertheless, several other verifications are necessary before accepting this conclusion.
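The sub-sampling check described above can be sketched in a few lines. This is a synthetic illustration, not the thesis's data: draw a random 25% sub-sample and compare its distribution of coded trip-chaining patterns against the full sample with a two-sample Kolmogorov-Smirnov test (applied, as in the thesis's comparison, to the ordered pattern codes).

```python
# Illustrative sketch (synthetic data): does a 25% sub-sample reproduce the
# full sample's frequency distribution of coded trip-chaining patterns?
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# synthetic population: 12 pattern codes with unequal frequencies
probs = np.array([0.25, 0.18, 0.12, 0.10, 0.08, 0.07,
                  0.06, 0.05, 0.04, 0.03, 0.015, 0.005])
full_sample = rng.choice(12, size=20_000, p=probs)

# random 25% sub-sample without replacement
sub_sample = rng.choice(full_sample, size=len(full_sample) // 4, replace=False)

# two-sample K-S test on the ordered pattern codes
stat, p = stats.ks_2samp(full_sample, sub_sample)
print(f"KS = {stat:.4f}, p = {p:.4f}")
```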
45

Conformidade à lei de Newcomb-Benford de grandezas astronômicas segundo a medida de Kolmogorov-Smirnov / Conformity of astronomical quantities to the Newcomb-Benford law according to the Kolmogorov-Smirnov measure

ALENCASTRO JUNIOR, José Vianney Mendonça de 09 September 2016 (has links)
A lei de Newcomb-Benford, também conhecida como a lei do dígito mais significativo, foi descrita pela primeira vez por Simon Newcomb, sendo apenas embasada estatisticamente após 57 anos pelo físico Frank Benford. Essa lei rege grandezas naturalmente aleatórias e tem sido utilizada por várias áreas como forma de selecionar e validar diversos tipos de dados. Em nosso trabalho tivemos como primeiro objetivo propor o uso de um método substituto ao qui-quadrado, sendo este atualmente o método comumente utilizado pela literatura para verificação da conformidade da Lei de Newcomb-Benford. Fizemos isso pois em uma massa de dados com uma grande quantidade de amostras o método qui-quadrado tende a sofrer de um problema estatístico conhecido por excesso de poder, gerando assim resultados do tipo falso negativo na estatística. Dessa forma propomos a substituição do método qui-quadrado pelo método de Kolmogorov-Smirnov baseado na Função de Distribuição Empírica para análise da conformidade global, pois esse método é mais robusto não sofrendo do excesso de poder e também é mais fiel à definição formal da Lei de Benford, já que o mesmo trabalha considerando as mantissas ao invés de apenas considerar dígitos isolados.
Também propomos investigar um intervalo de confiança para o Kolmogorov-Smirnov baseando-nos em um qui-quadrado que não sofre de excesso de poder por se utilizar o Bootstraping. Em dois artigos publicados recentemente, dados de exoplanetas foram analisados e algumas grandezas foram declaradas como conformes à Lei de Benford. Com base nisso eles sugerem que o conhecimento dessa conformidade possa ser usado para uma análise na lista de objetos candidatos, o que poderá ajudar no futuro na identificação de novos exoplanetas nesta lista. Sendo assim, um outro objetivo de nosso trabalho foi explorar diversos bancos e catálogos de dados astronômicos em busca de grandezas, cuja a conformidade à lei do dígito significativo ainda não seja conhecida a fim de propor aplicações práticas para a área das ciências astronômicas. / The Newcomb-Benford law, also known as the significant-digit law, was first described by the astronomer and mathematician Simon Newcomb and was given a statistical grounding only 57 years later by the physicist Frank Benford. The law governs naturally random quantities and has been used in many fields to select and validate several kinds of data. The first goal of this work is to propose a substitute for the chi-square method, which is currently the method most commonly used in the literature to verify conformity to the Newcomb-Benford law. This is necessary because, on data sets with a very large number of samples, the chi-square test tends to suffer from a statistical problem known as excess power, producing false-negative results. We therefore propose replacing the chi-square test with the Kolmogorov-Smirnov test based on the empirical distribution function (EDF) for the global conformity analysis: this test is more robust, does not suffer from excess power, and is also more faithful to the formal definition of Benford's law, since it works with mantissas rather than isolated digits.
We also propose to investigate a confidence interval for the Kolmogorov-Smirnov statistic, based on a bootstrapped chi-square that does not suffer from the excess-power problem. In two recently published papers, exoplanet data were analysed and some quantities were declared to conform to Benford's law; on that basis, the authors suggest that knowledge of this conformity could be used to analyse the list of candidate objects and thus help identify new exoplanets in that list. A further goal of this work is therefore to explore several astronomical databases and catalogues in search of quantities whose conformity to the significant-digit law is not yet known, in order to propose practical applications for the astronomical sciences.
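The excess-power problem motivating the dissertation can be demonstrated numerically. In this synthetic illustration, a first-digit distribution differing from Benford's law by only half a percentage point on one digit is rejected decisively by the chi-square test once the sample is large enough:

```python
# Illustration of "excess power": with a very large n, the chi-square test
# rejects data that deviates only negligibly from Benford's first-digit law.
# Distributions here are synthetic.
import numpy as np
from scipy import stats

digits = np.arange(1, 10)
benford = np.log10(1 + 1 / digits)       # P(first digit = d)

# almost-Benford distribution: a balanced 0.5-point perturbation
almost = benford.copy()
almost[0] += 0.005
almost[1] -= 0.005

n = 1_000_000
observed = n * almost                    # expected counts under "almost"
chi2, p = stats.chisquare(observed, f_exp=n * benford)
print(f"chi2 = {chi2:.1f}, p = {p:.2e}")  # rejects despite the tiny deviation
```

The EDF-based Kolmogorov-Smirnov approach proposed in the dissertation is meant to avoid precisely this kind of practically meaningless rejection.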
46

Míry kvality klasifikačních modelů a jejich převod / Quality measures of classification models and their conversion

Hanusek, Lubomír January 2003 (has links)
The predictive power of classification models can be evaluated by various measures. The most popular measures in data mining (DM) are the Gini coefficient, the Kolmogorov-Smirnov statistic and lift. Each of these measures is calculated in a completely different way, so an analyst used to one of them may find it difficult to assess the predictive power of a model evaluated by another. The aim of this thesis is to develop a method for converting one performance measure into another. Although the thesis focuses mainly on the above-mentioned measures, it also deals with others such as sensitivity, specificity, total accuracy and the area under the ROC curve. During the development of DM models, one may need to work with a sample stratified by the values of the target variable Y instead of the whole population containing millions of observations. If a model is developed on stratified data, its measures may need to be converted to the whole population; this thesis describes how to carry out this conversion. A software application (CPM) enabling all these conversions forms part of this thesis. With this application, one can not only convert one performance measure into another, but also convert measures calculated on a stratified sample to the whole population. Besides the above-mentioned performance measures (sensitivity, specificity, total accuracy, Gini coefficient, Kolmogorov-Smirnov statistic), CPM also generates a confusion matrix and performance charts (lift chart, gains chart, ROC chart and KS chart). The thesis comprises the user manual for this application as well as the web address where it can be downloaded. The theory described in the thesis was verified on real data.
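One widely known conversion of the kind the thesis studies can be shown directly: for a score-based classifier, Gini = 2·AUC − 1, and the Kolmogorov-Smirnov statistic is the maximum vertical gap between the two classes' score distributions. The sketch below computes all three on synthetic scores (it illustrates the standard relationships, not the thesis's CPM application).

```python
# Sketch: Gini, AUC and KS computed on one synthetic scored data set.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
neg = rng.normal(0.0, 1.0, 5000)   # scores of the negative class
pos = rng.normal(1.0, 1.0, 5000)   # scores of the positive class

# AUC via the Mann-Whitney U statistic: AUC = U / (n_pos * n_neg)
u = stats.mannwhitneyu(pos, neg).statistic
auc = u / (len(pos) * len(neg))
gini = 2 * auc - 1                          # the standard conversion
ks = stats.ks_2samp(pos, neg).statistic     # max gap between class CDFs
print(f"AUC = {auc:.3f}, Gini = {gini:.3f}, KS = {ks:.3f}")
```

Conversions between Gini/AUC and KS are distribution-dependent in general, which is exactly why a dedicated conversion method like the one the thesis proposes is useful.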
47

Paramètres minéralogiques et microtexturaux utilisables dans les études de traçabilité des minerais métalliques

Machault, Julie 07 November 2012 (has links) (PDF)
Whether for speculative purposes or to finance armed conflicts, great opacity surrounds the supply chains of mineral raw-material concentrates, for which demand keeps increasing. Given the distance between the primary extraction sites and the sites where finished products are made, it is difficult to identify the origin of these products. For the traceability of concentrates, establishing an "identity card" of the ore would allow trade in the mineral industry to be monitored. The problem can be posed in terms of inversion: tracing back to the original ore by studying the product sold. Two stages must be distinguished: 1) the characterization of the raw ore, and 2) the "memory loss" of the run-of-mine characteristics during mineral processing. The parameters retained are the mineralogical composition, the identification of characteristic microfacies of the target minerals, the pseudo-paragenetic succession, and the content and distribution of minor elements in the target minerals. The target minerals chosen are pyrite for its ubiquity, sphalerite because it can incorporate a wide variety of minor elements, some of potential added value, and chalcopyrite because it is often associated with the other two minerals. The chemical compositions of the mineral phases are compared by computing the Kolmogorov-Smirnov and Colin-White distances. Tests were carried out on volcanogenic massive sulphide deposits. They showed that the retained characteristics made it possible to distinguish the pyrites, sphalerites and chalcopyrites of two deposits of the South Iberian Pyrite Belt (IPB), of seven deposits of the Urals province, and of the active Rainbow black smoker. The identity cards obtained discriminate both the different settings (IPB, Urals and Rainbow) and the deposits within a single province.
The mineralogical and microtextural parameters were also tracked during the mineral processing at the Neves Corvo mine. For a given processing chain, the "memory loss" parameter is an estimate of the error made during the inversion, but also a way of characterizing a sequence of mineral-processing operations.
48

Automatic State Construction using Decision Trees for Reinforcement Learning Agents

Au, Manix January 2005 (has links)
Reinforcement Learning (RL) is a learning framework in which an agent learns a policy from continual interaction with the environment. A policy is a mapping from states to actions. The agent receives rewards as feedback on the actions performed. The objective of RL is to design autonomous agents that search for the policy maximizing the expected cumulative reward. When the environment is partially observable, the agent cannot determine the states with certainty; such states are called hidden in the literature. An agent that relies exclusively on the current observations will not always find the optimal policy. For example, a mobile robot in a corridor of identical doors needs to remember the number of doors it has passed in order to reach a specific one. To overcome the problem of partial observability, an agent uses both current and past (memory) observations to construct an internal state representation, which is treated as an abstraction of the environment. This research focuses on how features of past events are extracted with variable granularity for the construction of the internal state. The project introduces a new method that applies information theory and decision-tree techniques to derive a tree structure representing both the state and the policy. The relevance of a candidate feature is assessed by its Information Gain Ratio ranking with respect to the cumulative expected reward. Experiments carried out on three different RL tasks have shown that our variant of the U-Tree (McCallum, 1995) produces a more robust state representation and faster learning. This better performance can be explained by the fact that the Information Gain Ratio exhibits a lower variance in return prediction than the Kolmogorov-Smirnov statistical test used in the original U-Tree algorithm.
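The Information Gain Ratio criterion used to rank candidate features can be computed as follows. This is a toy illustration with made-up counts, not the thesis's experiments: the returns are discretized into two classes and a binary feature splits the data into two branches.

```python
# Toy computation of the Information Gain Ratio for one candidate binary
# split of discretized returns.  Counts are illustrative placeholders.
import numpy as np

def entropy(counts):
    """Shannon entropy (bits) of a count vector."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    return -(p * np.log2(p)).sum()

# class counts of discretized cumulative reward: [low, high]
parent = [40, 60]
left, right = [35, 15], [5, 45]      # counts in each branch after the split

n = sum(parent)
n_l, n_r = sum(left), sum(right)
info_gain = (entropy(parent)
             - (n_l / n) * entropy(left)
             - (n_r / n) * entropy(right))
split_info = entropy([n_l, n_r])     # intrinsic information of the split
gain_ratio = info_gain / split_info
print(f"gain = {info_gain:.3f}, gain ratio = {gain_ratio:.3f}")
```

Normalizing the gain by the split information penalizes splits that fragment the data into many small branches, which is what makes the ratio a more stable ranking criterion than raw gain.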
49

Statistické metody pro popis provozu restaurace / Statistical Methods for Description of Running a Restaurant

Novotná, Lenka January 2010 (has links)
The diploma thesis is written to illustrate the application of statistical methods describing the course of economic processes in a company. The thesis is divided into two separate parts. The first part focuses on theoretical knowledge of control charts and time series. The second part comprises chapters focused on their practical use. A simple application for creating control charts is enclosed.
50

Využití regulačních diagramů pro kontrolu jakosti / Use of Control Charts in Quality Control

Ječmínková, Michaela January 2014 (has links)
This diploma thesis deals with the use of Shewhart control charts in quality control. It describes the process of quality control currently used in the enterprise. Afterwards, practical guidance is provided for implementing statistical process control for the selected component and evaluating its capability. An application for creating control charts and monitoring product quality is included.
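The core of a Shewhart chart like those both control-chart theses describe is the 3-sigma limit computation. The sketch below shows the standard individuals-chart version on synthetic measurements (the moving-range constant 1.128 is the usual d2 for subgroups of size 2); it is a generic textbook sketch, not either thesis's application.

```python
# Sketch: Shewhart individuals-chart limits with sigma estimated from the
# average moving range (MR-bar / 1.128).  Measurements are synthetic.
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(10.0, 0.2, 50)            # e.g. a machined dimension in mm

mr = np.abs(np.diff(x))                  # moving ranges of consecutive points
sigma_hat = mr.mean() / 1.128            # d2 constant for subgroup size 2
center = x.mean()
ucl = center + 3 * sigma_hat             # upper control limit
lcl = center - 3 * sigma_hat             # lower control limit

out_of_control = (x > ucl) | (x < lcl)
print(f"CL = {center:.3f}, UCL = {ucl:.3f}, LCL = {lcl:.3f}, "
      f"signals = {out_of_control.sum()}")
```

Points outside the limits (or systematic runs inside them) trigger the search for assignable causes, which is the monitoring loop both theses build on.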
