111
ON THE STRUCTURE OF GAMES AND THEIR POSETS
Siegel, Angela Annette, 21 April 2011
This thesis explores the structure of games, including both the internal structure of various games and the structure of classes of games as partially ordered sets. Internal structure is explored through juxtapositions of game positions and the way the underlying games interact. We look at ordinal sums and introduce side-sums as a means of understanding this interaction, giving a full solution to a Toppling Dominoes variant through their application. Loopy games in which only one player is allowed a pass move, referred to as Oslo games, are introduced and their game structure explored. The poset of Oslo games is shown to form a distributive lattice. The Oslo forms of Wythoff's game, Grundy's game and the octal game .007 are introduced and full solutions given. Finally, the poset of option-closed games is given up to day 3 and is shown to form a planar lattice. The option-closed game of Cricket Pitch is also fully analyzed.
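As an aside for readers unfamiliar with ordinal sums: in the ordinal sum G:H, a move in the base G discards the subordinate H, while a move in H leaves G intact. A minimal Python sketch of this standard construction follows; the representation and the naive win-test are illustrative and are not taken from the thesis, which works with game values rather than brute-force search.

```python
# Illustrative sketch of the standard ordinal sum G:H (not code from the
# thesis). Games are modelled as (left_options, right_options) tuples.

ZERO = ((), ())                       # the empty game { | }
STAR = ((ZERO,), (ZERO,))             # * = { 0 | 0 }

def ordinal_sum(g, h):
    """G:H = { G^L, G:H^L  |  G^R, G:H^R }."""
    gl, gr = g
    hl, hr = h
    left = gl + tuple(ordinal_sum(g, x) for x in hl)
    right = gr + tuple(ordinal_sum(g, x) for x in hr)
    return (left, right)

def left_wins_moving_first(g):
    """Naive game-tree search: Left wins iff some move leaves a position
    that Right cannot win moving first."""
    return any(not right_wins_moving_first(x) for x in g[0])

def right_wins_moving_first(g):
    return any(not left_wins_moving_first(x) for x in g[1])

# *:* works out to {0, * | 0, *} = *2, a first-player win (like a Nim
# heap of size 2), so both players win moving first.
g = ordinal_sum(STAR, STAR)
print(left_wins_moving_first(g), right_wins_moving_first(g))   # True True
```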
112
Development of Wastewater Collection Network Asset Database, Deterioration Models and Management Framework
Younis, Rizwan, January 2010
The dynamics around managing urban infrastructure are changing dramatically. Today's infrastructure management challenges, in the wake of shrinking coffers and stricter stakeholder requirements, include finding better condition assessment tools and prediction models, and making effective and intelligent use of hard-earned data to ensure the sustainability of urban infrastructure systems. Wastewater collection networks, an important and critical component of urban infrastructure, have been neglected, and as a result, municipalities in North America and other parts of the world have accrued significant liabilities and infrastructure deficits. To reduce the cost of ownership, to cope with heightened accountability, and to provide reliable and sustainable service, these systems need to be managed in an effective and intelligent manner.
The overall objective of this research is to present a new strategic management framework and related tools to support multi-perspective maintenance, rehabilitation and replacement (M, R&R) planning for wastewater collection networks. The principal objectives of this research include:
(1) Developing a comprehensive wastewater collection network asset database consisting of high-quality condition assessment data to support the work presented in this thesis, as well as future research in this area.
(2) Proposing a framework and related system to aggregate heterogeneous data from municipal wastewater collection networks to develop a better understanding of their historical and future performance.
(3) Developing statistical models to understand the deterioration of wastewater pipelines.
(4) Investigating how strategic management principles and theories can be applied to effectively manage wastewater collection networks, and proposing a new management framework and related system.
(5) Demonstrating the application of the strategic management framework and economic principles, along with the proposed deterioration model, to develop long-term financial sustainability plans for wastewater collection networks.
A relational database application, WatBAMS (Waterloo Buried Asset Management System), consisting of high-quality data from the City of Niagara Falls wastewater collection system, is developed. The wastewater pipeline inspections were completed using a relatively new Side Scanner and Evaluation Technology camera that has advantages over traditional Closed Circuit Television cameras. Appropriate quality assurance and quality control procedures were developed and adopted to capture, store and analyze the condition assessment data. To aggregate heterogeneous data from municipal wastewater collection systems, a data integration framework based on a data warehousing approach is proposed. A prototype application, BAMS (Buried Asset Management System), based on XML technologies and specifications, demonstrates the implementation of the proposed framework. Using pipeline condition assessment data from the City of Niagara Falls wastewater collection network, the limitations of ordinary and binary logistic regression methodologies for deterioration modeling of wastewater pipelines are demonstrated. Two new empirical models based on the ordinal regression modeling technique are proposed. A new multi-perspective (operational/technical, social/political, regulatory, and financial) strategic management framework, based on a modified balanced-scorecard model, is developed. The proposed framework is based on the findings of the first Canadian National Asset Management workshop held in Hamilton, Ontario in 2007. The application of the balanced-scorecard model along with additional management tools, such as strategy maps, dashboard reports and business intelligence applications, is presented using data from the City of Niagara Falls. Using economic principles and example management scenarios, the application of the Monte Carlo simulation technique along with the proposed deterioration model is presented to forecast financial requirements for long-term M, R&R plans for wastewater collection networks.
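To make the last step concrete, here is a hedged sketch of a Monte Carlo cost forecast of the kind described: a toy Markov deterioration chain stands in for the thesis's ordinal-regression deterioration model, and all transition probabilities, costs and horizons are invented for illustration.

```python
# Hedged sketch: Monte Carlo forecast of long-term renewal costs for a pipe
# network whose condition grades deteriorate through a Markov chain. All
# numbers are illustrative; the thesis uses an ordinal-regression model.

import numpy as np

rng = np.random.default_rng(0)

P = np.array([          # yearly transitions between condition grades 1..4
    [0.90, 0.08, 0.02, 0.00],
    [0.00, 0.88, 0.10, 0.02],
    [0.00, 0.00, 0.85, 0.15],
    [0.00, 0.00, 0.00, 1.00],   # grade 4 is absorbing until renewal
])
RENEWAL_COST = 500.0            # illustrative cost per renewed pipe
N_PIPES, HORIZON, N_SIMS = 1000, 50, 200

def simulate_total_cost():
    state = np.zeros(N_PIPES, dtype=int)      # all pipes start at grade 1
    cost = 0.0
    for _ in range(HORIZON):
        # advance each pipe one year through the chain
        u = rng.random(N_PIPES)
        cum = P[state].cumsum(axis=1)
        state = (u[:, None] > cum).sum(axis=1)
        # renew anything that reached the worst grade
        failed = state == 3
        cost += failed.sum() * RENEWAL_COST
        state[failed] = 0                     # renewed pipes return to grade 1
    return cost

costs = np.array([simulate_total_cost() for _ in range(N_SIMS)])
print(f"mean 50-year cost {costs.mean():,.0f}, "
      f"95th percentile {np.quantile(costs, 0.95):,.0f}")
```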
A myriad of asset management systems and frameworks were found for transportation infrastructure. However, to date, few efforts have concentrated on understanding the performance behaviour of wastewater collection systems and on developing effective and intelligent M, R&R strategies. Incomplete inventories, and the scarcity and poor quality of existing datasets on wastewater collection systems, were found to be critical and limiting issues in conducting research in this field. It was found that existing deterioration models either violated model assumptions or the assumptions could not be verified due to limited data of questionable quality. The degradation of reinforced concrete pipes was found to be affected by age, whereas for vitrified clay pipes the degradation was not age-dependent. The results of the financial simulation model show that the City of Niagara Falls can save millions of dollars in the long term by following a proactive M, R&R strategy.
The work presented in this thesis provides insight into how an effective and intelligent management system can be developed for wastewater collection networks. The proposed framework and related system will support the sustainability of wastewater collection networks and assist municipal public works departments in proactively managing them.
113
Some Aspects on Confirmatory Factor Analysis of Ordinal Variables and Generating Non-normal Data
Luo, Hao, January 2011
This thesis, which consists of five papers, is concerned with various aspects of confirmatory factor analysis (CFA) of ordinal variables and the generation of non-normal data. The first paper studies the performance of different estimation methods used in CFA when ordinal data are encountered. To take ordinality into account, four estimation methods, i.e., maximum likelihood (ML), unweighted least squares, diagonally weighted least squares, and weighted least squares (WLS), are used in combination with polychoric correlations. The effects of model size and number of categories on the parameter estimates, their standard errors, and the common chi-square measure of fit are examined when the models are both correct and misspecified. The second paper focuses on the appropriate estimator of the polychoric correlation when fitting a CFA model. A non-parametric polychoric correlation coefficient based on the discrete version of Spearman's rank correlation is proposed to contend with the situation of non-normal underlying distributions. The simulation study shows the benefits of using the non-parametric polychoric correlation under conditions of non-normality. The third paper raises the issue of simultaneous factor analysis. We study the effect of pooling multi-group data on the estimation of factor loadings. Given the same factor loadings but different factor means and correlations, we investigate how much information is lost by pooling the groups together and estimating only the combined data set using the WLS method. The parameter estimates and their standard errors are compared with results obtained by multi-group analysis using ML. The fourth paper uses a Monte Carlo simulation to assess the reliability of Fleishman's power method under various conditions of skewness, kurtosis, and sample size. Based on the generated non-normal samples, the power of D'Agostino's (1986) normality test is studied. The fifth paper extends the evaluation of algorithms to the generation of multivariate non-normal data. Apart from the requirement of generating reliable skewness and kurtosis, the generated data also need to possess the desired correlation matrices. Four algorithms are investigated in terms of simplicity, generality, and reliability of the technique.
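As background for the fourth and fifth papers: Fleishman's power method maps a standard normal Z to Y = a + bZ + cZ^2 + dZ^3 so that Y has zero mean, unit variance, and target third and fourth moments. A minimal sketch using the standard Fleishman (1978) moment equations follows; the solver's starting values are ad hoc assumptions.

```python
# Minimal sketch of Fleishman's power method: solve the standard moment
# equations for (b, c, d), with a = -c keeping the mean at zero.

import numpy as np
from scipy.optimize import fsolve
from scipy.stats import skew, kurtosis

def fleishman_coeffs(skewness, excess_kurtosis):
    def equations(p):
        b, c, d = p
        return [
            b**2 + 6*b*d + 2*c**2 + 15*d**2 - 1,                 # variance = 1
            2*c * (b**2 + 24*b*d + 105*d**2 + 2) - skewness,     # third moment
            24*(b*d + c**2*(1 + b**2 + 28*b*d)
                + d**2*(12 + 48*b*d + 141*c**2 + 225*d**2))
            - excess_kurtosis,                                   # fourth moment
        ]
    b, c, d = fsolve(equations, x0=(1.0, 0.1, 0.1))              # ad hoc start
    return -c, b, c, d

a, b, c, d = fleishman_coeffs(skewness=1.0, excess_kurtosis=1.5)
z = np.random.default_rng(1).standard_normal(200_000)
y = a + b*z + c*z**2 + d*z**3
print(skew(y), kurtosis(y))     # should be close to 1.0 and 1.5
```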
114
Readjusting Historical Credit Ratings: Using Ordered Logistic Regression and Principal Component Analysis
Cronstedt, Axel; Andersson, Rebecca, January 2018
The introduction of the Basel II Accord as a regulatory document for credit risk presented new concepts of credit risk management and credit risk measurement, such as enabling international banks to use internal estimates of probability of default (PD), exposure at default (EAD) and loss given default (LGD). These three measurements are the foundation of the regulatory capital calculations and are all in turn based on the bank's internal credit ratings. It has hence been of increasing importance to build sound credit rating models that can provide accurate measurements of the credit risk of borrowers. These statistical models are usually based on empirical data, and the goodness-of-fit of a model depends mainly on the quality and statistical significance of the data. Therefore, one of the most important aspects of credit rating modeling is to have a sufficient number of observations to be statistically reliable, making the success of a rating model heavily dependent on the data collection and development stage.
The main purpose of this project is to, in a simple but efficient way, create a longer time series of homogeneous data by readjusting the historical credit rating data of one of Svenska Handelsbanken AB's credit portfolios. This readjustment is done by developing ordered logistic regression models that use independent variables consisting of macroeconomic data in separate ways. One model uses macroeconomic variables compiled into principal components, generated through a principal component analysis, while the other models use the same macroeconomic variables separately in different combinations. The models are tested to evaluate their ability to readjust the portfolio as well as their predictive capabilities.
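A hedged sketch of the model family described above: macroeconomic predictors are compressed with PCA and fed to an ordered (proportional-odds) logistic regression. All data, variable names and the number of components are invented stand-ins; the bank's actual rating scale and variables are not public.

```python
# Hedged sketch: PCA-compressed macro variables in an ordered logit model.
# The simulated data and column names are illustrative only.

import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(42)
n = 400
macro = pd.DataFrame({                      # illustrative macro series
    "gdp_growth": rng.normal(2, 1, n),
    "unemployment": rng.normal(7, 1.5, n),
    "interest_rate": rng.normal(3, 1, n),
    "house_prices": rng.normal(4, 2, n),
})
# latent credit quality driven by the macro state, cut into 5 rating grades
latent = 0.8 * macro.gdp_growth - 0.5 * macro.unemployment + rng.logistic(size=n)
rating = pd.cut(latent, 5, labels=False)    # 0 = worst grade, 4 = best

# compress the (typically collinear) macro variables into two components
pcs = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(macro))

res = OrderedModel(rating, pcs, distr="logit").fit(method="bfgs", disp=False)
print(res.summary())
```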
115
Coordinate Measuring Machines (CMM) with Optical Scanning Systems and Optical CMM
Palásek, Vítězslav, January 2009
This thesis deals with coordinate measuring machines (CMMs) equipped with optical scanning systems, and with optical CMMs. In accordance with the assignment, and based on a survey of well-known contactless systems, the aim is to develop a methodology for the objective classification and rating of these systems, together with an overview of them. The first part covers the optical principles used by vision scanners on CMMs and by optical CMMs. It describes the principle of contactless acquisition of a three-dimensional digital model of a measured object using lasers and optical devices. The second part gives a brief survey of these contactless systems and of the components used for coordinate measuring. The survey is divided into optical scanners, which are mounted on a CMM or on a mobile measuring arm, and optical CMMs, which either localize the position of the sensing head in space or scan the measured object from a specific distance (photogrammetry). For each manufacturer, the characteristics of the offered systems and a chart of technical data are given. The third and fourth parts propose a methodology for the objective choice of a suitable type of scanner or optical CMM with regard to the quality characteristics of the scanning system. Based on this methodology, a suitable representative stationary measuring machine with both contact and contactless scanning is chosen, and the two approaches are compared.
116
Analysis and Prediction of League Match Results
Šimsa, Filip, January 2015
The thesis is devoted to the analysis of ice hockey match results in the top Czech league competition in the seasons 1999/2000 to 2014/2015, and to the prediction of subsequent matches. We describe and apply Kalman filter theory, where the forms of the teams represent an unobservable state vector and the results of matches serve as measurements. Goal differences are identified as a suitable transformation of a match result. They are used as the dependent variable in a linear regression to find significant predictors. For the prediction of a match result, we construct an ordinal model with those predictors. Using the generalized Gini coefficient, we compare the diversification power of this model with that of the betting odds offered by betting companies. Finally, we combine knowledge of the odds before a match with other predictors to build a prediction model, which is used to identify profitable bets.
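For orientation, here is a minimal sketch of the state-space idea: team forms are an unobserved vector that drifts between rounds, and the goal difference of a match measures the difference of the two teams' forms. This toy filter updates only the diagonal of the covariance; the variances and fixtures are illustrative assumptions, not the thesis's estimates.

```python
# Simplified (diagonal-covariance) Kalman filter for team forms, with the
# measurement goal_diff = form[home] - form[away] + noise. Illustrative only.

import numpy as np

n_teams = 4
form = np.zeros(n_teams)            # posterior mean of each team's form
var = np.ones(n_teams)              # posterior variance, kept diagonal
Q, R = 0.02, 2.0                    # drift and observation noise variances

def kalman_update(home, away, goal_diff):
    global form, var
    var = var + Q                             # predict: forms drift a little
    innovation = goal_diff - (form[home] - form[away])
    s = var[home] + var[away] + R             # innovation variance
    k_home, k_away = var[home] / s, var[away] / s
    form[home] += k_home * innovation         # pull both teams toward the data
    form[away] -= k_away * innovation
    var[home] *= 1 - k_home                   # shrink the updated variances
    var[away] *= 1 - k_away

for home, away, gd in [(0, 1, 3), (2, 3, -1), (0, 2, 1), (1, 3, 0)]:
    kalman_update(home, away, gd)
print(np.round(form, 2))
```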
117
Regression Analysis for Ordinal Outcomes in Matched Study Design: Applications to Alzheimer's Disease Studies
Austin, Elizabeth, 09 July 2018
Alzheimer's Disease (AD) affects nearly 5.4 million Americans as of 2016 and is the most common form of dementia. The disease is characterized by the presence of neurofibrillary tangles and amyloid plaques [1]. The amount of plaques is measured post-mortem by Braak stage. It is known that AD is positively associated with hypercholesterolemia [16]. As statins are the most widely used cholesterol-lowering drugs, there may be associations between statin use and AD. We hypothesize that those who use statins, specifically lipophilic statins, are more likely to have a low Braak stage on post-mortem analysis.
To address this hypothesis, we wished to fit a regression model for ordinal outcomes (e.g., high, moderate, or low Braak stage) using data collected from the National Alzheimer's Coordinating Center (NACC) autopsy cohort. As the outcomes were matched on the length of follow-up, a conditional likelihood-based method is often used to estimate the regression coefficients. However, it can be challenging to solve the conditional likelihood-based estimating equation numerically, especially when there are many matching strata. Given that the likelihood of a conditional logistic regression model is equivalent to the partial likelihood of a stratified Cox proportional hazards model, the existing R function for a Cox model, coxph(), can be used to estimate a conditional logistic regression model. We would like to investigate whether this strategy can be extended to a regression model for ordinal outcomes.
More specifically, our aims are to (1) demonstrate the equivalence between the exact partial likelihood of a stratified discrete-time Cox proportional hazards model and the likelihood of a conditional logistic regression model, (2) prove equivalence, or the lack thereof, between the exact partial likelihood of a stratified discrete-time Cox proportional hazards model and the conditional likelihoods of models appropriate for multiple ordinal outcomes: an adjacent-categories model, a continuation-ratio model, and a cumulative logit model, and (3) clarify how to set up a stratified discrete-time Cox proportional hazards model for multiple ordinal outcomes with matching using the existing coxph() R function, and how to interpret the resulting regression coefficient estimates. We verified the theoretical results through simulation studies. We simulated data from the three models of interest: an adjacent-categories model, a continuation-ratio model, and a cumulative logit model. We fit a Cox model to the simulated data produced by each model using the existing coxph() R function, and compared the coefficient estimates obtained. Lastly, we fit a Cox model to the NACC dataset, using Braak stage, with three ordinal categories, as the outcome variable. We included predictors for age at death, sex, genotype, education, comorbidities, number of days having taken lipophilic statins, number of days having taken hydrophilic statins, and time to death. We matched cases to controls on the length of follow-up. All findings and their implications are discussed in detail.
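For readers who want to see the binary-outcome version of this equivalence numerically, here is a hedged Python sketch (the thesis itself works in R with coxph()). It simulates 1:1 matched sets and fits both a conditional logistic regression and a stratified Cox model; with a single event per stratum there are no ties, so the usual partial-likelihood approximations coincide with the exact one. The libraries, column names and simulation settings are illustrative choices, not from the thesis.

```python
# Hedged sketch: conditional logistic regression on 1:1 matched sets agrees
# with a stratified Cox model in which the case "fails" before its control.

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from statsmodels.discrete.conditional_models import ConditionalLogit

rng = np.random.default_rng(7)
n_pairs, beta = 300, 0.8
rows = []
for i in range(n_pairs):
    x = rng.normal(size=2)                          # exposures of the pair
    p = np.exp(beta * x) / np.exp(beta * x).sum()   # P(member is the case)
    case = rng.choice(2, p=p)
    for j in range(2):
        rows.append({"stratum": i, "x": x[j],
                     "event": int(j == case),
                     # case fails at t=1; control still at risk, censored at t=2
                     "time": 1 if j == case else 2})
df = pd.DataFrame(rows)

clogit = ConditionalLogit(df["event"], df[["x"]], groups=df["stratum"]).fit(disp=False)
cox = CoxPHFitter().fit(df, duration_col="time", event_col="event",
                        strata=["stratum"], formula="x")
print(clogit.params["x"], cox.params_["x"])         # essentially identical
```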
118
Prediction of Bronchopulmonary Dysplasia by a Priori and Longitudinal Risk Factors in Extremely Premature Infants
Pax, Benjamin M., 01 June 2018
No description available.
119
Semiparametric Bayesian Approach Using Weighted Dirichlet Process Mixture for Finance Statistical Models
Sun, Peng, 07 March 2016
The Dirichlet process mixture (DPM) has been widely used as a flexible prior in the nonparametric Bayesian literature, and the weighted Dirichlet process mixture (WDPM) can be viewed as an extension of DPM that relaxes model distribution assumptions. However, WDPM requires weight functions to be specified and can incur an extra computational burden. In this dissertation, we develop more efficient and flexible WDPM approaches under three research topics. The first is semiparametric cubic spline regression, where we adopt a nonparametric prior for the error terms in order to automatically handle heterogeneity of measurement errors or an unknown mixture distribution. The second provides an innovative way to construct a weight function and illustrates some desirable properties and the computational efficiency of this weight under a semiparametric stochastic volatility (SV) model. The last develops a WDPM approach for the Generalized AutoRegressive Conditional Heteroskedasticity (GARCH) model (as an alternative to the SV model) and proposes a new model evaluation approach for GARCH that produces easier-to-interpret results than the canonical marginal likelihood approach.
In the first topic, the response variable is modeled as the sum of three parts. One part is a linear function of covariates that enter the model parametrically. The second part is an additive nonparametric model: covariates whose relationships to the response variable are unclear are included in the model nonparametrically using Lancaster and Šalkauskas bases. The third part consists of error terms whose means and variances are assumed to follow nonparametric priors. We therefore call our model a dual-semiparametric regression, because nonparametric ideas are used both for modeling the mean and for the error terms. Instead of assuming that all of the error terms follow the same prior, as in DPM, our WDPM provides multiple candidate priors for each observation to select from with certain probabilities. Such a probability (or weight) is modeled from relevant predictive covariates using a Gaussian kernel. We propose several different WDPMs using different weights, which depend on distances in the covariates. We provide efficient Markov chain Monte Carlo (MCMC) algorithms and also compare our WDPMs to a parametric model and the DPM model in terms of Bayes factors, using simulation and empirical studies.
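A small sketch of the covariate-dependent weights described above: each observation distributes probability over J candidate base priors according to a Gaussian kernel in a predictive covariate. The anchor points and bandwidth below are illustrative assumptions; the dissertation's own weight construction in the later topics differs.

```python
# Illustrative Gaussian-kernel weights: row i gives P(observation i uses
# candidate prior j), j = 1..J, based on covariate distance to anchors.

import numpy as np

def gaussian_kernel_weights(x, anchors, bandwidth):
    d2 = (x[:, None] - anchors[None, :]) ** 2        # squared distances
    w = np.exp(-d2 / (2 * bandwidth**2))
    return w / w.sum(axis=1, keepdims=True)          # normalize over candidates

x = np.array([0.1, 0.9, 2.0])                        # observed covariates
anchors = np.array([0.0, 1.0, 2.0])                  # one anchor per prior
print(gaussian_kernel_weights(x, anchors, bandwidth=0.5).round(3))
```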
In the second topic, we propose an innovative way to construct the weight function for WDPM and apply it to the SV model. SV models are adopted for time series data where the constant-variance assumption is violated. One essential issue is specifying the distribution of the conditional return. We assume a WDPM prior for the conditional return and propose a new way to model the weights. Our approach has several advantages, including computational efficiency, over weights constructed using a Gaussian kernel. We list six properties of the proposed weight function and provide proofs of them. Because of the additional Metropolis-Hastings steps introduced by the WDPM prior, we find conditions which ensure the uniform geometric ergodicity of the transition kernel in our MCMC. Due to the existence of zero values in asset price data, our SV model is semiparametric: we employ a WDPM prior for non-zero values and a parametric prior for zero values.
In the third topic, we develop a WDPM approach for GARCH-type models and compare different types of weight functions, including the innovative method proposed in the second topic. The GARCH model can be viewed as an alternative to SV for analyzing daily stock price data where the constant-variance assumption does not hold. While the response variable of our SV models is the transformed log return (based on a log-square transformation), GARCH directly models the log return itself. This means that, theoretically speaking, we are able to predict stock returns using GARCH models, while this is not feasible with SV models, because SV models ignore the sign of the log returns and provide predictive densities for the squared log return only. Motivated by this property, we propose a new model evaluation approach, called back-testing return (BTR), particularly for GARCH. The BTR approach produces model evaluation results that are easier to interpret than the marginal likelihood, and it is straightforward to draw conclusions about model profitability by applying it. Since the BTR approach is only applicable to GARCH, we also illustrate how to properly calculate the marginal likelihood to make comparisons between GARCH and SV. Based on our MCMC algorithms and model evaluation approaches, we have conducted a large number of model fits to compare models in both simulation and empirical studies. / Ph. D.
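For orientation, here is a bare GARCH(1,1) recursion of the kind compared against the SV models above. The parameters are illustrative, and the back-testing return (BTR) evaluation itself is specific to the dissertation, so it is not reproduced here.

```python
# Illustrative GARCH(1,1) simulation: r_t = sigma_t * z_t with
# sigma_t^2 = omega + alpha * r_{t-1}^2 + beta * sigma_{t-1}^2.

import numpy as np

def simulate_garch11(omega=0.05, alpha=0.08, beta=0.9, n=1000, seed=3):
    rng = np.random.default_rng(seed)
    r = np.empty(n)
    sigma2 = omega / (1 - alpha - beta)      # start at the stationary variance
    for t in range(n):
        r[t] = np.sqrt(sigma2) * rng.standard_normal()
        sigma2 = omega + alpha * r[t]**2 + beta * sigma2
    return r

returns = simulate_garch11()
print(returns.std(), "vs stationary sd", np.sqrt(0.05 / (1 - 0.08 - 0.9)))
```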
120
Exploring the Correlation Between Reading Ability and Mathematical Ability: KTH Master Thesis Report
Sol, Richard; Rasch, Alexander, January 2023
Reading and mathematics are two essential subjects for academic success and cognitive development. Several studies show a correlation between the reading ability and mathematical ability of pupils (Korpershoek et al., 2015; Ní Ríordáin & O'Donoghue, 2009; Reikerås, 2006; Walker et al., 2008). The didactical part of this thesis presents a study investigating the correlation between reading ability and mathematical ability among pupils in upper secondary schools in Sweden. The study was conducted in collaboration with Lexplore AB, using machine learning and eye-tracking to measure reading ability. Mathematical ability was measured with Mathematics 1c grades and Stockholmsprovet, a diagnostic mathematics test. Although no correlation was found, the results yield several insights about selection and measurement that may improve future studies on the subject. This thesis finds that the result could have been affected by a biased selection of participants. It also suggests that the machine learning and eye-tracking measure used in the study may not fully capture the concept of reading ability as defined in previous studies. The technological part of this thesis focuses on modifying and improving the model used to calculate users' reading ability scores. As the model's estimates tend to plateau after the fifth year of compulsory school, the study aims to maintain the same level of progression observed before this point. Previous research indicates that silent reading, being unconstrained by vocalization, is faster than reading aloud. To address this flattening of progression, a grid search algorithm was employed to adjust hyperparameters and assign appropriate weights to silent and aloud reading. The findings emphasize that reading aloud should be prioritized in the weighted average and the corresponding hyperparameters adjusted accordingly. Furthermore, gathering more data for older pupils can improve the machine learning model by accounting for individual reading strategies. Introducing different word complexity factors can also enhance the model's performance.
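A hedged sketch of the grid search described above: choose the weight w in score = w * aloud + (1 - w) * silent that best preserves progression across school years. The data, the plateau pattern and the loss function are invented stand-ins for Lexplore's internal model.

```python
# Illustrative grid search over the aloud/silent weighting. The simulated
# scores mimic the described pattern: silent reading plateaus after year 5.

import numpy as np

rng = np.random.default_rng(5)
years = np.arange(1, 10)                       # school years 1..9
aloud = 40 + 6.0 * years + rng.normal(0, 1, 9)
silent = 40 + 6.0 * np.minimum(years, 5) + rng.normal(0, 1, 9)

def progression_loss(w):
    combined = w * aloud + (1 - w) * silent
    yearly_gain = np.diff(combined)
    return np.var(yearly_gain)                 # penalize uneven progression

grid = np.linspace(0, 1, 101)
best_w = grid[np.argmin([progression_loss(w) for w in grid])]
print(f"best aloud weight: {best_w:.2f}")      # leans toward aloud, as found above
```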