51 |
A station-level analysis of rail transit ridership in Austin / Yang, Qiqian, 30 September 2014 (has links)
Community and Regional Planning / Over the past two decades, Austin has experienced tremendous population growth, expanding job opportunities in the downtown core, and the transportation challenges that come with them. Public transit, and particularly rail, is often regarded as a strategy to help reduce urban traffic congestion. Urban Rail, which combines features of streetcars and light rail, has been introduced in Austin as a new rail transit mode. The City of Austin, Capital Metro and Lone Star Rail are actively studying the routing, financial, environmental and community elements associated with a first phase of Urban Rail.
This thesis uses data from the 2010 Origin and Destination Rail Transit Survey collected by the Capital Metropolitan Transportation Authority. The research focuses on rail transit ridership: two regression models are applied to analyze the factors influencing Austin rail transit ridership, one focusing on socioeconomic characteristics and the other on spatial factors.
Our models show that demographic factors have a more significant effect than spatial factors.
In addition, this work analyzes the correlations between those factors and makes recommendations based on the analysis results.
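As a rough illustration of the two-model setup described in this abstract, consider the sketch below; all covariate names and values are hypothetical stand-ins, not the thesis's actual survey data:

```python
# Hypothetical illustration: fit two station-level ridership models,
# one on socioeconomic covariates, one on spatial covariates.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 9  # a small station sample, purely illustrative
socio = rng.normal(size=(n, 2))    # e.g., median income, population density
spatial = rng.normal(size=(n, 2))  # e.g., distance to CBD, bus connections
ridership = 500 + socio @ [120, 80] + spatial @ [40, 20] + rng.normal(0, 50, n)

m_socio = sm.OLS(ridership, sm.add_constant(socio)).fit()
m_spatial = sm.OLS(ridership, sm.add_constant(spatial)).fit()
print(m_socio.rsquared, m_spatial.rsquared)  # compare explanatory power
```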
|
52 |
Porovnání prediktorů / Comparison of predictors / Jelínková, Hana, January 2011 (has links)
No description available.
|
53 |
Robust mixture regression model fitting by Laplace distribution / Xing, Yanru, January 1900 (has links)
Master of Science / Department of Statistics / Weixing Song / A robust estimation procedure for mixture linear regression models is proposed in this report by assuming that the error terms follow a Laplace distribution. An EM algorithm is implemented to carry out the estimation, exploiting the fact that the Laplace distribution is a scale mixture of a normal distribution and a latent exponential variable. The finite-sample performance of the proposed algorithm is evaluated through extensive simulation studies, together with comparisons against other existing procedures in the literature. A sensitivity study based on a real-data example is also conducted to illustrate the application of the proposed method.
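A minimal sketch of the scale-mixture EM idea for a single-component Laplace regression (the report's model adds mixture components on top of this; all names here are illustrative). Conditioning on the latent scale turns each M-step into weighted least squares, with E-step weights proportional to 1/|residual|:

```python
# EM for linear regression with Laplace errors via the normal scale-mixture
# representation; it reduces to iteratively reweighted least squares.
import numpy as np

def laplace_regression_em(X, y, n_iter=50, eps=1e-8):
    beta = np.linalg.lstsq(X, y, rcond=None)[0]   # OLS start
    for _ in range(n_iter):
        r = y - X @ beta
        w = 1.0 / np.maximum(np.abs(r), eps)      # E-step: E[1/scale | residual]
        XtW = X.T * w
        beta = np.linalg.solve(XtW @ X, XtW @ y)  # M-step: weighted LS
    return beta

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
y = X @ np.array([1.0, 2.0]) + rng.laplace(scale=0.5, size=200)
print(laplace_regression_em(X, y))  # close to [1.0, 2.0]
```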
|
54 |
Empirical minimum distance lack-of-fit tests for Tobit regression models / Zhang, Yi, January 1900 (has links)
Master of Science / Department of Statistics / Weixing Song / The purpose of this report is to propose and evaluate two lack-of-fit test procedures to check the adequacy of the regression functional forms in standard Tobit regression models. It is shown that testing the null hypothesis for the standard Tobit regression model amounts to testing an equivalent null hypothesis for a classical regression model. Both procedures are constructed from empirical variants of a minimum distance that measures the squared difference between a nonparametric estimator and a parametric estimator of the regression function fitted under the null hypothesis for the new regression model. The asymptotic null distributions of the test statistics are investigated, as well as the power against some fixed alternatives and some local hypotheses. Simulation studies are conducted to assess the finite-sample power performance and the robustness of the tests. Comparisons between the two test procedures are also made.
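A hedged sketch of the general minimum-distance idea (not the report's exact statistic): compare a kernel regression estimator with the parametrically fitted regression function through an integrated squared difference.

```python
# Illustrative minimum-distance lack-of-fit statistic: integrated squared
# difference between a Nadaraya-Watson estimator and a fitted linear model.
import numpy as np

def nadaraya_watson(x0, x, y, h):
    k = np.exp(-0.5 * ((x0[:, None] - x[None, :]) / h) ** 2)  # Gaussian kernel
    return (k @ y) / k.sum(axis=1)

rng = np.random.default_rng(2)
n = 300
x = rng.uniform(0, 1, n)
y = 1 + 2 * x + rng.normal(0, 0.3, n)        # data generated under the null

b = np.polyfit(x, y, 1)                      # parametric (linear) fit
grid = np.linspace(0.05, 0.95, 100)
m_hat = nadaraya_watson(grid, x, y, h=0.08)  # nonparametric fit
m_par = np.polyval(b, grid)

T_n = np.trapz((m_hat - m_par) ** 2, grid)   # minimum-distance statistic
print(T_n)  # small under the null; calibrate by simulation or asymptotics
```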
|
55 |
Regressão binária bayesiana com o uso de variáveis auxiliares / Bayesian binary regression models using auxiliary variables / Farias, Rafael Braz Azevedo, 27 April 2007 (has links)
Bayesian inference is increasingly dependent on stochastic simulation algorithms, and its efficiency is directly related to the efficiency of the algorithm considered. A widely used technique is the introduction of auxiliary variables to obtain known forms for the full conditional posterior distributions, which facilitates implementation of the Gibbs sampler. However, introducing these auxiliary variables can produce algorithms whose simulated values are strongly correlated, which harms convergence. Grouping the unknown quantities into blocks, in such a way that they can be simulated jointly, is an alternative that reduces autocorrelation and therefore helps improve the efficiency of the simulation procedure. In this work, we present block-updating proposals in the context of binary response regression models with auxiliary variables. Three classes of link functions are considered: probit, logit and skew-probit. For the first two, we present and implement the joint-update schemes proposed by Holmes and Held (2006). For the skew-probit link we propose four different ways of constructing the blocks, and compare these algorithms through two efficiency measures (the average Euclidean distance between updates and the effective sample size). We conclude that the proposed algorithms are more efficient than the conventional one (without blocks), with one of them yielding a gain of more than 160% in effective sample size.
Moreover, we discuss an important stage of the modelling process, residual analysis. Here we adapt and implement, for the logistic and skew-probit models, the residuals originally proposed for the probit link. Finally, we use the proposed residuals to check for outlying observations in a simulated data set.
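A minimal sketch of the auxiliary-variable idea for the probit link: the classic Albert and Chib data augmentation, on which the Holmes and Held (2006) block updates build. The flat prior and dimensions are illustrative assumptions, not the thesis's setup.

```python
# Gibbs sampler for Bayesian probit regression via a truncated-normal
# auxiliary variable z_i, with a flat prior on beta for simplicity.
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(3)
n, p = 500, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([-0.5, 1.0])
y = (X @ beta_true + rng.normal(size=n) > 0).astype(int)

XtX_inv = np.linalg.inv(X.T @ X)
L = np.linalg.cholesky(XtX_inv)
beta = np.zeros(p)
draws = []
for it in range(2000):
    mu = X @ beta
    # z_i | beta, y_i ~ N(mu_i, 1) truncated to (0, inf) if y_i = 1, else (-inf, 0)
    lo = np.where(y == 1, -mu, -np.inf)
    hi = np.where(y == 1, np.inf, -mu)
    z = mu + truncnorm.rvs(lo, hi, random_state=rng)
    # beta | z ~ N((X'X)^{-1} X'z, (X'X)^{-1}) under the flat prior
    beta = XtX_inv @ (X.T @ z) + L @ rng.normal(size=p)
    draws.append(beta)
print(np.mean(draws[500:], axis=0))  # posterior mean, near beta_true
```

Because z and beta are updated one after the other, their draws can be highly autocorrelated, which is precisely the inefficiency the block (joint) updates studied in this thesis aim to reduce.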
|
56 |
Estudo da estabilidade da reação industrial de formação de óxido de etileno a partir do gerenciamento das variáveis críticas de processo. / Stability study of the industrial ethylene oxide reaction based on the management of critical process variables / Ribeiro, Luciano Gonçalves, 03 October 2013 (has links)
The performance of an ethylene oxide production process is normally evaluated through the selectivity of the reaction. In this work, a production unit was studied with the aim of maximizing selectivity by acting on the main process variables. A statistical analysis of a set of process data showed that four variables (oxygen flow, recycle gas flow, reaction temperature and chlorinated-compound content) have the greatest influence on selectivity and explain more than 60% of the variation observed in the production process. Based on this analysis, multiple linear regression models were developed and tested to represent the behaviour of the process as a function of these four variables alone. The proposed empirical mathematical model was validated both statistically and phenomenologically, demonstrating consistency with the data obtained from the process. The model was also broken down into 24 sub-models representing possible operating conditions of the unit; response surfaces constructed for each sub-model made it possible to define the best way to manage the four critical variables jointly, so as to obtain the maximum possible selectivity for the reaction under each of these operational scenarios.
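A hedged sketch of the modelling step described above, with simulated stand-ins for the four process variables (names, coefficients and grid ranges are illustrative assumptions, not plant data):

```python
# Multiple linear regression of selectivity on four process variables,
# followed by a simple response-surface evaluation over two of them.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 400
o2, recycle, temp, cl = (rng.normal(size=n) for _ in range(4))
selectivity = 80 + 1.5 * o2 - 0.8 * recycle - 2.0 * temp + 1.0 * cl \
    + rng.normal(0, 1.0, n)

X = sm.add_constant(np.column_stack([o2, recycle, temp, cl]))
fit = sm.OLS(selectivity, X).fit()
print(fit.rsquared)  # share of variation explained by the four variables

# Response surface in (temp, cl), holding the other variables at their means
tg, cg = np.meshgrid(np.linspace(-2, 2, 50), np.linspace(-2, 2, 50))
grid = np.column_stack([np.ones(tg.size), np.full(tg.size, o2.mean()),
                        np.full(tg.size, recycle.mean()),
                        tg.ravel(), cg.ravel()])
surface = (grid @ fit.params).reshape(tg.shape)
print(surface.max())  # best predicted selectivity on this grid
```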
|
57 |
Resident Student Perceptions of On-Campus Living and Study Environments at the University of Namibia and their Relation to Academic Performance / Neema, Isak, 29 April 2003 (has links)
This study measures resident students' perceptions of the on-campus living and study environments at the University of Namibia campus residences and their relation to academic performance. Data were obtained from a stratified random sample of resident students, with hostels (individual dormitories) as strata. Academic performance was measured by grade point average obtained from the university registrar; perceptions of the living and study environments were obtained from a survey. Inferences were made from the sample to the population concerning student perceptions of the adequacy of the library and of campus safety, and differences in perceptions between students living in old-style and new-style hostels. To relate student perceptions to academic performance, a model regressing GPA on the perception variables was constructed. The principal findings were: (1) student perceptions do not differ between old and new hostels; (2) room type is associated with time spent in the hostel, with the ability to study in the room during the day, and with the ability to study in the room at night; time spent in the hostel is also associated with the number of times a student changes blocks, the ability to study at night with the availability of a study desk and of a study lamp in the room, and the perceived effectiveness of UNAM security personnel both with feeling safe studying in classes at night and with the view that campus security should remain unchanged; (3) mean GPA differs with respect to room type, ability to study in the room during the day, time spent in the hostel, number of block changes, current year of study, time spent on study, self-catering status, sufficiency of the water supply in blocks, enrolment in the Law and B.Commerce fields of study, and receipt of financial support in the form of loans; (4) the variables found to be significant in the regression model were the Law field of study, double rooms, inability to study in the room during the day, and self-catering.
|
58 |
Análise de contagens multivariadas. / Multivariate count analysis / Ho, Linda Lee, 15 September 1995 (has links)
This work presents a statistical analysis of multivariate counts from several populations through regression models. Cases are considered in which the response vectors follow the multivariate Poisson and the multivariate Poisson log-normal distributions. The latter admits correlations of either sign between components of the response vector, whereas the distributions most commonly used for count data (such as the multivariate Poisson) admit only positive correlation between components. Estimation methods and hypothesis tests for the model parameters are discussed in the bivariate case. The regression models are applied to a data set of counts of two types of defects in 100 g of textile fibres produced by four cracking machines, two from one manufacturer and two from a second one. The results obtained from the different regression models are compared. To study the behaviour of the parameter estimates of the Poisson log-normal distribution, samples were simulated from this distribution.
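A minimal sketch of why the Poisson log-normal model can produce negatively correlated counts, unlike the multivariate Poisson (parameter values are illustrative only):

```python
# Simulate a bivariate Poisson log-normal vector: a multivariate normal
# random effect on the log-means induces correlation of either sign.
import numpy as np

rng = np.random.default_rng(5)
n = 20000
mu = np.array([1.0, 1.0])        # log-scale means
Sigma = np.array([[0.5, -0.35],  # negative off-diagonal covariance
                  [-0.35, 0.5]])
u = rng.multivariate_normal(mu, Sigma, size=n)
y = rng.poisson(np.exp(u))       # counts given the latent rates

print(np.corrcoef(y.T)[0, 1])    # negative, impossible under multivariate Poisson
```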
|
59 |
Novel regression models for discrete response / Peluso, Alina, January 2017 (has links)
In a regression context, the aim is to analyse a response variable of interest conditional on a set of covariates. In many applications the response variable is discrete. Examples include the event of surviving a heart attack, the number of hospitalisation days, the number of times individuals benefit from a health service, and so on. This thesis advances the methodology and the application of regression models with discrete response.
First, we present a difference-in-differences approach to model a binary response in a health policy evaluation framework. In particular, generalized linear mixed methods are employed to model multiple dependent outcomes in order to quantify the effect of an adopted pay-for-performance program while accounting for the heterogeneity of the data at the multiple nested levels. The results show how the policy had a positive effect on hospitals' quality in terms of those outcomes that can be more readily influenced by managerial activity.
Next, we focus on regression models for count response variables. In a parametric framework, Poisson regression is the simplest model for count data, though it often proves inadequate in real applications, particularly in the presence of excessive zeros and of dispersion, i.e. when the conditional mean differs from the conditional variance. Negative Binomial regression is the standard model for over-dispersed data, but it fails in the presence of under-dispersion. Poisson-Inverse Gaussian regression can be used for over-dispersed data, Generalised-Poisson regression for under-dispersed data, and Conway-Maxwell Poisson regression for both, though the interpretability of these models is not straightforward and they are often computationally demanding. While jittering is the default non-parametric approach for count data, inference has to be made for each individual quantile, separate quantiles may cross, and the underlying uniform random sampling can generate instability in the estimation.
These features motivate the development of a novel parametric regression model for counts via a Discrete Weibull distribution. This distribution is able to adapt to different types of dispersion relative to the Poisson, and it also has the advantage of a closed-form expression for the quantiles. As well as the standard regression model, generalized linear mixed models and generalized additive models are presented via this distribution. Simulated and real data applications with different types of dispersion show a good performance of Discrete Weibull-based regression models compared with existing regression approaches for count data.
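A minimal sketch of the type-I Discrete Weibull distribution underlying these models, showing the closed-form quantile the thesis highlights. The regression link shown for q in the trailing comment is one common choice, stated here as an assumption rather than the thesis's exact specification:

```python
# Type-I Discrete Weibull: P(Y >= y) = q**(y**beta) for y = 0, 1, 2, ...
import numpy as np

def dw_pmf(y, q, beta):
    return q ** (y ** beta) - q ** ((y + 1) ** beta)

def dw_quantile(tau, q, beta):
    # Smallest y with F(y) >= tau, available in closed form
    return np.ceil((np.log(1 - tau) / np.log(q)) ** (1 / beta) - 1)

q, beta = 0.7, 0.9
y = np.arange(0, 50)
p = dw_pmf(y, q, beta)
mean = (y * p).sum()
var = ((y - mean) ** 2 * p).sum()
print(mean, var)                  # variance exceeds mean here: over-dispersion
print(dw_quantile(0.5, q, beta))  # closed-form median
# In a regression model, q could be linked to covariates, e.g.
# q_i = 1 / (1 + exp(-x_i @ theta)), giving conditional quantiles directly.
```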
|
60 |
Modely změn v ekonometrických časových řadách / Models of changes in econometric time series / Strejc, Petr, January 2012 (has links)
This paper is concerned with change-point detection in the parameters of econometric regression models when a training set of data without any change is available. Two well-known sequential tests are presented, the CUSUM test for the linear regression model and a test based on weighted residuals for an autoregressive time series, together with their asymptotic properties under certain conditions. Two asymptotically equivalent variance estimators are compared in a finite-sample situation using Monte Carlo simulations. Critical-value approximations using different bootstrapping methods and variance estimators are also presented and compared. Finally, the weighted-residuals test is applied to S&P 500 historical data.
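A hedged sketch of the sequential CUSUM idea with a training sample (the boundary function below is a simple illustrative choice, not the thesis's exact critical-value construction):

```python
# Sequential CUSUM detector for a change in regression parameters:
# fit on a training window, then monitor cumulative sums of residuals.
import numpy as np

rng = np.random.default_rng(6)
m, horizon = 200, 300
X = np.column_stack([np.ones(m + horizon), rng.normal(size=m + horizon)])
beta = np.array([0.0, 1.0])
y = X @ beta + rng.normal(size=m + horizon)
y[m + 150:] += 1.5          # a level shift 150 steps into monitoring

b_hat = np.linalg.lstsq(X[:m], y[:m], rcond=None)[0]  # training fit
sigma = np.std(y[:m] - X[:m] @ b_hat, ddof=2)

resid = y[m:] - X[m:] @ b_hat
S = np.cumsum(resid)
k = np.arange(1, horizon + 1)
boundary = 3.0 * sigma * np.sqrt(m) * (1 + k / m)     # illustrative boundary
alarm = np.argmax(np.abs(S) > boundary)               # first crossing (0 if none)
print(alarm)  # expected some steps after the shift at 150
```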
|