41. Global Demand Forecast Model / Alsalous, Osama, 19 January 2016 (has links)
Air transportation demand forecasting is a core element of aviation planning and policy decision making. NASA Langley Research Center addressed the need for a global forecast model to be integrated into the Transportation Systems Analysis Model (TSAM), fulfilling the vision of the Aeronautics Research Mission Directorate (ARMD) at NASA Headquarters to develop a picture of future demand worldwide. Forecasts can be produced using a range of techniques, depending on the available data and the scope of the forecast. Causal models are widely used as a forecasting tool; they look for relationships between historical demand and variables such as economic and population growth. The Global Demand Model is an econometric regression model that predicts the number of air passenger seats worldwide using Gross Domestic Product (GDP), population, and airline market share as the explanatory variables. GDP and population are converted to an individual-cell resolution of 2.5 arc minutes and aggregated at the airport level over the geographic area within 60 nautical miles of each airport. The Global Demand Model consists of a family of models, and each airport is assigned the model that best fits its historical data. The assignment is conducted through an algorithm that uses R² as the measure of goodness of fit, together with a sanity check on the generated forecasts. The output of the model is a projection of the number of seats offered at each airport for every year up to 2040. / Master of Science
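The per-airport assignment step can be sketched as follows. This is a hedged illustration only: the candidate model forms, variable names, and selection loop are assumptions, not the actual TSAM model family; the only elements taken from the abstract are regression on GDP and population and selection by R².

```python
import numpy as np

def r_squared(y, y_hat):
    # Coefficient of determination, used here as the goodness-of-fit measure.
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

def fit_best_model(seats, gdp, pop):
    # Hypothetical candidate design matrices: a linear and a log-linear form.
    candidates = {
        "linear":     np.column_stack([np.ones_like(gdp), gdp, pop]),
        "log-linear": np.column_stack([np.ones_like(gdp), np.log(gdp), np.log(pop)]),
    }
    best_name, best_r2 = None, -np.inf
    for name, X in candidates.items():
        coef, *_ = np.linalg.lstsq(X, seats, rcond=None)
        r2 = r_squared(seats, X @ coef)
        if r2 > best_r2:
            best_name, best_r2 = name, r2
    return best_name, best_r2

# Synthetic airport history for illustration: seats driven mostly by GDP.
rng = np.random.default_rng(1)
gdp = np.linspace(1.0, 2.0, 15)
pop = np.linspace(3.0, 4.5, 15)
seats = 100 + 50 * gdp + 10 * pop + rng.normal(0, 0.1, 15)
name, r2 = fit_best_model(seats, gdp, pop)
```

A forecast would then apply the winning model's coefficients to projected GDP and population for each year through 2040.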
42. An Intrusion Detection System for Battery Exhaustion Attacks on Mobile Computers / Nash, Daniel Charles, 15 June 2005 (has links)
Mobile personal computing devices continue to proliferate, and individuals' reliance on them for day-to-day needs necessitates that these platforms be secure. Mobile computers are subject to a unique form of denial-of-service attack known as a battery exhaustion attack, in which an attacker attempts to rapidly drain the battery of the device. Battery exhaustion attacks greatly reduce the utility of mobile devices by decreasing battery life. If steps are not taken to thwart these attacks, they have the potential to become as widespread as the attacks currently mounted against desktop systems.
This thesis presents steps in the design of an intrusion detection system for detecting these attacks, one that takes into account the performance, energy, and memory constraints of mobile computing devices. The system uses several parameters, such as CPU load and disk accesses, to estimate the power consumption of two test systems using multiple linear regression models. This allows energy use to be attributed on a per-process basis, and thus identifies processes that are potentially battery exhaustion attacks. / Master of Science
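The regression step can be sketched in a few lines. The parameter values, units, and the two predictors below are illustrative assumptions; the thesis abstract specifies only that CPU load and disk accesses are among the inputs to a multiple linear regression for power.

```python
import numpy as np

# Hypothetical training data: per-interval CPU load (%), disk accesses, and a
# synthetic power draw generated from made-up coefficients (b0=1.5, b1=0.02, b2=0.05).
cpu   = np.array([10, 35, 60, 80, 20, 95, 50, 70], dtype=float)
disk  = np.array([ 2, 10,  5, 20,  1, 30,  8, 15], dtype=float)
power = 1.5 + 0.02 * cpu + 0.05 * disk

# Multiple linear regression: power ≈ b0 + b1*cpu + b2*disk
X = np.column_stack([np.ones_like(cpu), cpu, disk])
b, *_ = np.linalg.lstsq(X, power, rcond=None)

def estimated_power(cpu_load, disk_ops):
    # Attribute power to a single process from its own counters over an interval;
    # integrating over time would give per-process energy.
    return b[0] + b[1] * cpu_load + b[2] * disk_ops
```

A process whose estimated per-process energy greatly exceeds its expected profile would then be flagged as a possible battery exhaustion attack.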
43. Optimal One- and Two-Stage Designs for the Logistic Regression Model / Letsinger, William C. II, 13 February 2009 (has links)
Binary response data are often modeled using the logistic regression model, a well-known nonlinear model. Designing an optimal experiment for this nonlinear situation poses problems not encountered with a linear model. The application of several design optimality criteria to the logistic regression model is explored, and many resulting optimal designs are given. Implementing these optimal designs requires the model parameters to be known. In practice, however, they are not known; if they were, there would be no need to design an experiment. Consequently, the parameters must be estimated prior to implementing a design.
Standard one-stage optimal designs are quite sensitive to parameter misspecification and are therefore unsatisfactory in practice. A two-stage Bayesian design procedure is developed that effectively deals with poor parameter knowledge while maintaining high efficiency. The first stage uses Bayesian design as well as Bayesian estimation to cope with parameter misspecification. Using the parameter estimates from the first stage, the second stage conditionally optimizes a chosen design optimality criterion. Asymptotically, the two-stage procedure is considerably more efficient than the one-stage design when the parameters are misspecified, and only slightly less efficient when the parameters are known. The superiority of the two-stage procedure over the one-stage is even more evident for small samples. / Ph. D.
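Why an optimal design depends on the unknown parameters can be seen in a small numerical sketch. This is not the dissertation's procedure; it is a standard textbook illustration, under assumed parameter guesses, of locating the classical two-point D-optimal design for a simple logistic model by grid search.

```python
import numpy as np

def info_det(c, beta0, beta1):
    # Determinant of the Fisher information for a symmetric two-point design
    # at x = ±c (equal weights) under a logistic model with guessed parameters.
    M = np.zeros((2, 2))
    for x in (-c, c):
        p = 1.0 / (1.0 + np.exp(-(beta0 + beta1 * x)))
        f = np.array([1.0, x])
        M += 0.5 * p * (1.0 - p) * np.outer(f, f)
    return np.linalg.det(M)

beta0, beta1 = 0.0, 1.0                       # illustrative parameter guesses
cs = np.linspace(0.1, 4.0, 400)
c_star = cs[np.argmax([info_det(c, beta0, beta1) for c in cs])]
# The maximizer lands near 1.54: the D-optimal points sit at logits of about
# ±1.5434, so the design shifts whenever the guessed parameters change —
# which is exactly why a one-stage design suffers under misspecification.
```

In the two-stage spirit, the first-stage estimates would replace the guesses `(beta0, beta1)` before the second-stage design is optimized.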
44. Prevalence of Chronic Diseases and Risk Factors for Death among Elderly Americans / Han, Guangming, 14 July 2011 (has links)
The main aim of this study is to explore the effects of risk factors contributing to death in the elderly American population. To this end, we constructed Cox proportional hazards regression models and logistic regression models using the complex survey dataset from the national Second Longitudinal Study of Aging (LSOA II) to calculate hazard ratios (HR), odds ratios (OR), and confidence intervals (CI) for the risk factors. Our results show that in addition to chronic disease conditions, many risk factors have significant effects on death in the elderly American population, including demographic factors (gender and age), social factors (interaction with friends or relatives), personal health behaviors (smoking and exercise), and biomedical factors (body mass index and emotional factors). These findings provide important information to help elderly people prolong their lifespans, whether or not they have chronic diseases.
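The odds-ratio arithmetic behind such logistic models is simple to sketch. The coefficient and standard error below are made-up values for illustration; the actual LSOA II estimates are not reproduced here.

```python
import math

# Hypothetical logistic-regression coefficient for a risk factor
# (say, smoking) and its standard error; both values are illustrative.
beta, se = 0.45, 0.12

odds_ratio = math.exp(beta)
ci_95 = (math.exp(beta - 1.96 * se), math.exp(beta + 1.96 * se))
# An OR above 1, with a 95% CI excluding 1, indicates significantly
# elevated odds of death associated with the risk factor.
```

The hazard ratios from the Cox models are read analogously, as `exp(beta)` of the fitted coefficient.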
45. Monte Carlo Examination of Static and Dynamic Student t Regression Models / Paczkowski, Remi, 07 January 1998 (has links)
This dissertation examines a number of issues related to Static and Dynamic Student t Regression Models.
The Static Student t Regression Model is derived and transformed to an operational form. The operational form is then examined in a series of Monte Carlo experiments. The model is judged based on its usefulness for estimation and testing and its ability to model the heteroskedastic conditional variance. It is also compared with the traditional Normal Linear Regression Model.
Subsequently, the analysis is broadened to a dynamic setup. The Student t Autoregressive Model is derived and a number of its operational forms are considered. Three forms are selected for detailed examination in a series of Monte Carlo experiments. The models' usefulness for estimation and testing is evaluated, as well as their ability to model the conditional variance. The models are also compared with the traditional Dynamic Linear Regression Model. / Ph. D.
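A minimal Monte Carlo experiment in the spirit of the static case can be sketched as follows. The design, sample size, replication count, and degrees of freedom are arbitrary illustrative choices, not those of the dissertation.

```python
import numpy as np

rng = np.random.default_rng(42)
n, reps, df = 50, 500, 5            # sample size, replications, t degrees of freedom
beta_true = np.array([1.0, 2.0])
x = np.linspace(-1, 1, n)
X = np.column_stack([np.ones(n), x])

estimates = np.empty((reps, 2))
for r in range(reps):
    # Regression with Student t errors: heavier tails than the Normal model.
    y = X @ beta_true + rng.standard_t(df, size=n)
    estimates[r], *_ = np.linalg.lstsq(X, y, rcond=None)

# Averaging over replications estimates the bias of the slope and intercept.
bias = estimates.mean(axis=0) - beta_true
```

A fuller experiment would compare such least-squares results against the Student t likelihood-based estimator across replications, which is the kind of contrast the Monte Carlo chapters examine.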
46. Einführung in die Ökonometrie (Introduction to Econometrics) / Huschens, Stefan, 30 March 2017 (has links) (PDF)
Chapters 1 through 6 in the first part of this script are based on a lecture course, Econometrics I, last given in the winter semester of 2001/02; Chapters 7 through 16 are based on a lecture course, Econometrics II, last given in the summer semester of 2006. Chapter 8 contains a condensed summary of the results from the Econometrics I part.
47. Bayesian models for DNA microarray data analysis / Lee, Kyeong Eun, 29 August 2005 (links)
Selection of significant genes via expression patterns is important in a microarray problem. Owing to the small sample size and large number of variables (genes), the selection process can be unstable. This research proposes a hierarchical Bayesian model for gene (variable) selection. We employ latent variables in a regression setting and use a Bayesian mixture prior to perform the variable selection. Due to the binary nature of the data, the posterior distributions of the parameters are not in explicit form, and we need a combination of truncated sampling and Markov chain Monte Carlo (MCMC) computation techniques to simulate the posterior distributions. The Bayesian model is flexible enough to identify the significant genes as well as to perform future predictions. The method is applied to cancer classification via cDNA microarrays. In particular, the genes BRCA1 and BRCA2 are associated with a hereditary disposition to breast cancer, and the method is used to identify the set of significant genes that classify BRCA1 versus others. Microarray data can also be applied to survival models. We address the issue of how to reduce the dimension in building the model by selecting significant genes, as well as assessing the estimated survival curves. Additionally, we consider the well-known Weibull regression and semiparametric proportional hazards (PH) models for survival analysis. With microarray data, we need to consider the case where the number of covariates p exceeds the number of samples n. Specifically, for a given vector of response values, which are times to event (death or censored times), and p gene expressions (covariates), we address the issue of how to reduce the dimension by selecting the responsible genes, which control the survival time. This approach enables us to estimate the survival curve when n << p. In our approach, rather than fixing the number of selected genes, we assign a prior distribution to this number.
The approach creates additional flexibility by allowing the imposition of constraints, such as bounding the dimension via a prior, which in effect works as a penalty. To implement our methodology, we use an MCMC method. We demonstrate the use of the methodology with (a) diffuse large B-cell lymphoma (DLBCL) complementary DNA (cDNA) data and (b) breast carcinoma data. Lastly, we propose a mixture of Dirichlet process models using the discrete wavelet transform for curve clustering. In order to characterize these time-course gene expressions, we consider them as trajectory functions of time and gene-specific parameters and obtain their wavelet coefficients by a discrete wavelet transform. We then build cluster curves using a mixture of Dirichlet process priors.
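The wavelet step can be illustrated with a one-level Haar transform, the simplest member of the discrete wavelet family; the abstract does not state which wavelet the dissertation actually uses, and the toy trajectory below is invented for illustration.

```python
import numpy as np

def haar_dwt(signal):
    # One level of the Haar discrete wavelet transform: pairwise averages
    # (approximation coefficients) and pairwise differences (detail
    # coefficients), scaled by 1/sqrt(2) to keep the transform orthonormal.
    s = np.asarray(signal, dtype=float)
    approx = (s[0::2] + s[1::2]) / np.sqrt(2.0)
    detail = (s[0::2] - s[1::2]) / np.sqrt(2.0)
    return approx, detail

# A smooth "expression trajectory" yields small detail coefficients, so a
# few wavelet coefficients summarize the curve — the compression that makes
# wavelet-domain clustering of time-course expressions practical.
approx, detail = haar_dwt([1.0, 1.1, 2.0, 2.1, 3.0, 3.1, 4.0, 4.1])
```

Clustering then operates on the coefficient vectors rather than the raw curves, with a mixture of Dirichlet process priors grouping similar trajectories.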
49. Comparison between Weibull and Cox proportional hazards models / Crumer, Angela Maria, January 1900 (has links)
Master of Science / Department of Statistics / James J. Higgins / The time for an event to take place in an individual is called a survival time. Examples include the time that an individual survives after being diagnosed with a terminal illness or the time that an electronic component functions before failing. A popular parametric model for this type of data is the Weibull model, a flexible model that allows the inclusion of covariates of the survival times. If distributional assumptions are not met or cannot be verified, researchers may turn to the semi-parametric Cox proportional hazards model, which also allows the inclusion of covariates of survival times but with less restrictive assumptions. This report compares estimates of the slope of the covariate in the proportional hazards model obtained from the parametric Weibull model and from the semi-parametric Cox proportional hazards model. Properties of these models are discussed in Chapter 1. Numerical examples and a comparison of the mean square errors of the slope estimates, for various sample sizes and for uncensored and censored data, are discussed in Chapter 2. When the shape parameter is known, the Weibull model far outperforms the Cox proportional hazards model; when the shape parameter is unknown, the two models give comparable results.
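The comparison rests on the fact that the Weibull model is itself a proportional hazards model, so survival times under it can be simulated by inverse-CDF sampling. The sketch below uses arbitrary parameter values, not those of the report's numerical examples.

```python
import numpy as np

rng = np.random.default_rng(0)
k, lam, beta = 1.5, 0.1, 0.7        # shape, baseline rate, covariate slope (illustrative)
n = 1000
x = rng.binomial(1, 0.5, size=n)    # binary covariate, e.g. treatment group

# Weibull PH survival function: S(t|x) = exp(-lam * exp(beta*x) * t**k).
# Setting S(t|x) equal to a uniform draw and solving for t gives a survival time.
u = rng.uniform(size=n)
t = (-np.log(u) / (lam * np.exp(beta * x))) ** (1.0 / k)

# A positive slope means a proportionally higher hazard for x = 1,
# and hence systematically shorter survival times in that group.
```

Fitting both the Weibull model and the Cox model to such simulated data, and repeating over many replications, is the kind of mean-square-error comparison the report carries out.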
50. A station-level analysis of rail transit ridership in Austin / Yang, Qiqian, 30 September 2014 (has links)
Community and Regional Planning / Over the past two decades, Austin has seen tremendous population growth, job growth in the downtown core, and the transportation challenges associated with both. Public transit, and particularly rail, is often regarded as a strategy to help reduce urban traffic congestion. Urban Rail, which combines features of streetcars and light rail, is being introduced in Austin as a new rail transit mode. The City of Austin, Capital Metro, and Lone Star Rail are actively studying the routing, financial, environmental, and community elements associated with a first phase of Urban Rail.
This thesis uses data from the 2010 Origin and Destination Rail Transit Survey, collected by the Capital Metropolitan Transportation Authority. The research focuses on rail transit ridership: two regression models are applied to analyze the factors influencing Austin rail transit ridership, one focusing on socioeconomic characteristics and the other on spatial factors.
Our models show that demographic factors have a more significant effect than spatial factors. In addition, this work analyzes the correlations between those factors and makes recommendations based on the results of the analysis.