111
Studies on bikeability in a metropolitan area using the active commuting route environment scale (ACRES). Wahlgren, Lina. January 2011.
Background: The Active Commuting Route Environment Scale (ACRES) was developed to study active commuters’ perceptions of their route environments. The overall aims were to assess the measuring properties of the ACRES and to study active bicycle commuters’ perceptions of their commuting route environments. Methods: Advertisement- and street-recruited bicycle commuters from Greater Stockholm, Sweden, responded to the ACRES. Expected differences between inner urban and suburban route environments were used to assess criterion-related validity, together with ratings from an assembled expert panel and existing objective measures. Reliability was assessed as test-retest reproducibility. Comparisons of ratings between advertisement- and street-recruited participants were used to assess representativity. Ratings of inner urban and suburban route environments were used to evaluate commuting route environment profiles. Simultaneous multiple linear regression analyses were used to assess the relation between the outcome variable (whether the route environment hinders or stimulates bicycle commuting) and environmental predictors, such as levels of exhaust fumes, speed of traffic and greenery, in inner urban areas. Results: The ACRES was characterized by considerable criterion-related validity and reasonable test-retest reproducibility. There was good correspondence between the advertisement- and street-recruited participants’ ratings. Distinct differences in commuting route environment profiles between the inner urban and suburban areas were noted. Suburban route environments were rated as safer and more stimulating for bicycle commuting. Beautiful, green and safe route environments seem to be, independently of each other, stimulating factors for bicycle commuting in inner urban areas. On the other hand, high levels of exhaust fumes and traffic congestion, as well as low ‘directness’ of the route, seem to be hindering factors.
Conclusions: The ACRES is useful for assessing bicyclists’ perceptions of their route environments. A number of environmental factors related to the route appear to be stimulating or hindering for bicycle commuting. The overall results demonstrate a complex research area at the beginning of exploration. / The FAAP project ”Fysiskt aktiv arbetspendling i Stor-Stockholm” (Physically Active Commuting in Greater Stockholm).
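The core analysis in this entry, a simultaneous multiple linear regression of a hinders-or-stimulates rating on several route-environment predictors, can be sketched on synthetic data. Everything below (variable names, rating scales, effect sizes) is invented for illustration; it is not ACRES data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Hypothetical route-environment ratings (scales and effects are invented)
exhaust = rng.uniform(1, 15, n)
greenery = rng.uniform(1, 15, n)
beauty = rng.uniform(1, 15, n)
# Outcome: low = route hinders commuting, high = route stimulates it
y = 8 - 0.4 * exhaust + 0.3 * greenery + 0.5 * beauty + rng.normal(0, 1, n)

# Simultaneous multiple regression: all predictors entered at once
X = np.column_stack([np.ones(n), exhaust, greenery, beauty])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(dict(zip(["intercept", "exhaust", "greenery", "beauty"], coef.round(2))))
```

With enough data the estimated signs recover the built-in effects: exhaust negative (hindering), greenery and beauty positive (stimulating), mirroring the pattern the abstract reports for inner urban areas.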
112
Exact Markov chain Monte Carlo and Bayesian linear regression. Bentley, Jason Phillip. January 2009.
In this work we investigate the use of perfect sampling methods within the context of Bayesian linear regression. We focus on inference problems related to the marginal posterior model probabilities. Model averaged inference for the response and Bayesian variable selection are considered. Perfect sampling is an alternate form of Markov chain Monte Carlo that generates exact sample points from the posterior of interest. This approach removes the need for burn-in assessment faced by traditional MCMC methods. For model averaged inference, we find the monotone Gibbs coupling from the past (CFTP) algorithm is the preferred choice. This requires the predictor matrix be orthogonal, preventing variable selection, but allowing model averaging for prediction of the response. Exploring choices of priors for the parameters in the Bayesian linear model, we investigate sufficiency for monotonicity assuming Gaussian errors. We discover that a number of other sufficient conditions exist, besides an orthogonal predictor matrix, for the construction of a monotone Gibbs Markov chain. Requiring an orthogonal predictor matrix, we investigate new methods of orthogonalizing the original predictor matrix. We find that a new method using the modified Gram-Schmidt orthogonalization procedure performs comparably with existing transformation methods, such as generalized principal components. Accounting for the effect of using an orthogonal predictor matrix, we discover that inference using model averaging for in-sample prediction of the response is comparable between the original and orthogonal predictor matrix. The Gibbs sampler is then investigated for sampling when using the original predictor matrix and the orthogonal predictor matrix. We find that a hybrid method, using a standard Gibbs sampler on the orthogonal space in conjunction with the monotone CFTP Gibbs sampler, provides the fastest computation and convergence to the posterior distribution. 
We conclude the hybrid approach should be used when the monotone Gibbs CFTP sampler becomes impractical, due to large backwards coupling times. We demonstrate large backwards coupling times occur when the sample size is close to the number of predictors, or when hyper-parameter choices increase model competition. The monotone Gibbs CFTP sampler should be taken advantage of when the backwards coupling time is small. For the problem of variable selection we turn to the exact version of the independent Metropolis-Hastings (IMH) algorithm. We reiterate the notion that the exact IMH sampler is redundant, being a needlessly complicated rejection sampler. We then determine a rejection sampler is feasible for variable selection when the sample size is close to the number of predictors and using Zellner’s prior with a small value for the hyper-parameter c. Finally, we use the example of simulating from the posterior of c conditional on a model to demonstrate how the use of an exact IMH view-point clarifies how the rejection sampler can be adapted to improve efficiency.
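The orthogonalization step this entry relies on is easy to illustrate. Below is a minimal modified Gram-Schmidt routine applied to a random predictor matrix, plus a check that ordinary least-squares fitted values agree between the original and orthogonalized matrices (they share a column space, which is why in-sample prediction is comparable). This is a sketch, not the thesis's code.

```python
import numpy as np

def modified_gram_schmidt(X):
    """Orthonormalize the columns of X with modified Gram-Schmidt."""
    Q = np.array(X, dtype=float)
    for j in range(Q.shape[1]):
        Q[:, j] /= np.linalg.norm(Q[:, j])
        for k in range(j + 1, Q.shape[1]):
            Q[:, k] -= (Q[:, j] @ Q[:, k]) * Q[:, j]
    return Q

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 4))          # original predictor matrix
Q = modified_gram_schmidt(X)          # orthogonal predictor matrix
y = rng.normal(size=50)

# Same column space, so in-sample fitted values agree between X and Q
fit_X = X @ np.linalg.lstsq(X, y, rcond=None)[0]
fit_Q = Q @ np.linalg.lstsq(Q, y, rcond=None)[0]
print(np.allclose(Q.T @ Q, np.eye(4)), np.allclose(fit_X, fit_Q))  # True True
```

The individual coefficients do change under the transformation; only quantities that depend on the column space, such as fitted values, are preserved.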
113
Analysis of variance and regression with applications (Ανάλυση διασποράς και παλινδρόμησης με εφαρμογές). Καμπέλη, Πετρούλα. 20 September 2010.
This thesis describes and develops two statistical methods for data analysis: linear regression with qualitative variables and analysis of variance (ANOVA), first one-way and then two-way. The methods are applied to real data from water samples of a small gulf, studying the degree of influence of three different rainfalls on the water pH. The analyses are carried out with the SPSS statistical package.
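A one-way ANOVA of the kind applied here reduces to comparing between-group and within-group variability. The toy pH numbers below are invented to keep the arithmetic transparent; they are not the thesis's gulf-water data.

```python
import numpy as np

# Hypothetical pH measurements for three rainfall events (not the thesis data)
rain = [np.array([5., 6., 7.]), np.array([8., 9., 10.]), np.array([11., 12., 13.])]
grand = np.concatenate(rain).mean()

ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in rain)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in rain)
df_between = len(rain) - 1                         # k - 1 = 2
df_within = sum(len(g) for g in rain) - len(rain)  # N - k = 6
F = (ss_between / df_between) / (ss_within / df_within)
print(F)  # 27.0 for this toy data
```

Compared against the F(2, 6) critical value (about 5.14 at the 5% level), an F of 27 would indicate that the group means differ; SPSS reports the same statistic with an exact p-value.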
114
A complex networks approach to designing resilient system-of-systems. Tran, Huy T. 07 January 2016.
This thesis develops a methodology for designing resilient system-of-systems (SoS) networks. This methodology includes a capability-based resilience assessment framework, used to quantify SoS resilience. A complex networks approach is used to generate potential SoS network designs, focusing on scale-free and random network topologies, degree-based and random rewiring adaptation, and targeted and random node removal threats. Statistical design methods, specifically response surface methodology, are used to evaluate SoS networks and provide an understanding of the advantages and disadvantages of potential designs. Linear regression is used to model a continuous representation of the network design space, and determine optimally resilient networks for particular threat types.
The methodology is applied to an information exchange (IE) network model (i.e., a message passing network model) and a military command and control (C2) model. Results show that optimally resilient IE network topologies are random for networks with adaptation, regardless of the threat type. However, the optimally resilient adaptation method sharply transitions from being fully random to fully degree-based as threat randomness increases. These findings suggest that intermediately defined networks should not be considered when designing for resilience. Cost-benefit analysis of C2 networks suggests that resilient C2 networks are more cost-effective than robust ones, as long as the cost of rewiring network links is less than three-fourths the cost of creating new links. This result identifies a threshold for which a resilient network design approach is more cost-effective than a robust one.
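The topology-versus-threat comparison can be sketched with toy graph generators. The stand-ins below (a small preferential-attachment graph versus a random graph of equal size, degree-targeted node removal as the threat, and largest-component size as a crude capability proxy) are simplifications of the thesis's models, using only the Python standard library.

```python
import random

def largest_component(adj):
    """Size of the largest connected component (a simple capability proxy)."""
    seen, best = set(), 0
    for start in adj:
        if start in seen:
            continue
        stack, size = [start], 0
        seen.add(start)
        while stack:
            u = stack.pop()
            size += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        best = max(best, size)
    return best

def remove_hubs(adj, k):
    """Targeted threat: delete the k highest-degree nodes."""
    for u in sorted(adj, key=lambda n: len(adj[n]), reverse=True)[:k]:
        for v in adj[u]:
            adj[v].discard(u)
        del adj[u]

def scale_free(n, m=2, seed=0):
    """Toy preferential-attachment (scale-free) topology."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    pool = list(range(m))   # node ids repeated roughly in proportion to degree
    for new in range(m, n):
        chosen = set()
        while len(chosen) < m:
            chosen.add(rng.choice(pool))
        for t in chosen:
            adj[new].add(t)
            adj[t].add(new)
        pool.extend(chosen)
        pool.extend([new] * m)
    return adj

def random_topology(n, n_edges, seed=0):
    """Erdos-Renyi-style random topology with a fixed edge count."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    added = 0
    while added < n_edges:
        u, v = rng.randrange(n), rng.randrange(n)
        if u != v and v not in adj[u]:
            adj[u].add(v)
            adj[v].add(u)
            added += 1
    return adj

sf = scale_free(200, m=2)
er = random_topology(200, sum(len(s) for s in sf.values()) // 2)
for g in (sf, er):
    remove_hubs(g, 20)      # targeted removal of 10% of nodes
print("scale-free:", largest_component(sf), "random:", largest_component(er))
```

Running variations of this (threat fraction, random versus targeted removal) reproduces the qualitative trade-off the abstract studies: scale-free topologies concentrate connectivity in hubs and so tend to suffer more under targeted removal than random topologies of the same size.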
115
Régression linéaire bayésienne sur données fonctionnelles / Functional Bayesian linear regression. Grollemund, Paul-Marie. 22 November 2017.
The linear regression model is a common tool for the statistician. If a covariate is a curve, we face a high-dimensional problem. In this case, sparse models lead to successful inference, for instance by expanding the functional covariate on a smaller-dimensional space. In this thesis, we propose a Bayesian approach, named Bliss, to fit the functional linear regression model. The Bliss model supposes, through the prior, that the coefficient function is a step function. From the posterior, we propose several estimators to be used depending on the context: an estimator of the support and two estimators of the coefficient function, a smooth one and a stepwise one. To illustrate this, we explain the black Périgord truffle yield with the rainfall during the truffle life cycle. The Bliss method succeeds in selecting two relevant periods for truffle development. As another feature of the Bayesian paradigm, the prior distribution enables the integration of preliminary judgments into the statistical inference. For instance, the biologists’ knowledge about truffle growth is relevant to inform the Bliss model. To this end, we propose two modifications of the Bliss model to take preliminary judgments into account. First, we indirectly collect preliminary judgments using pseudo-data provided by experts. The proposed prior distribution corresponds to the posterior distribution given the experts’ pseudo-data. Furthermore, the effect of each expert and their correlations are controlled with weighting. Secondly, we collect experts’ judgments about the most influential periods affecting the truffle yield and whether the effect is positive or negative. The proposed prior distribution relies on a penalization of coefficient functions which do not conform to these judgments. Lastly, the asymptotic behavior of the Bliss method is studied. We validate the proposed approach by showing the posterior consistency of the Bliss model. Using model-specific assumptions, an efficient proof of a Wald-type theorem is given. The main difficulty is the misspecification of the model, since the true coefficient function is surely not a step function. We show that the posterior distribution contracts on a step function which is the Kullback-Leibler projection of the true coefficient function onto a set of step functions. This step function is derived from the true parameter and the design.
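The idea of approximating a smooth coefficient function by a step function can be illustrated numerically. The sketch below uses an L2 projection (interval means) rather than the Kullback-Leibler projection of the thesis, and an invented coefficient function; it shows only the shape of the approximation, not the Bliss machinery.

```python
import numpy as np

# Invented smooth "true" coefficient function on [0, 1]
t = np.linspace(0.0, 1.0, 400)
beta = np.sin(2 * np.pi * t) * np.exp(-t)

def step_projection(values, knots):
    """L2 projection of a sampled function onto step functions with the
    given knots: each step takes the mean of the function over its interval.
    (Bliss's theory uses a Kullback-Leibler projection; this L2 version is
    only a simple illustration of the same idea.)"""
    out = np.empty_like(values)
    edges = np.searchsorted(t, knots)
    bounds = [0, *edges, len(values)]
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        out[lo:hi] = values[lo:hi].mean()
    return out

approx = step_projection(beta, knots=[0.25, 0.5, 0.75])
print(np.abs(beta - approx).max())  # approximation error of the 4-step fit
```

Refining the knots shrinks the error, which is the intuition behind contracting the posterior on step functions even when the true coefficient function is smooth.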
116
Study of the kinetics of hydrodenitrogenation reactions (Estudo da cinética das reações de hidrodesnitrogenação). FERNANDES, Thalita Cristine Ribeiro Lucas. 16 October 2018.
Previous issue date: 2017-09-27 / CNPq
/ Catalytic hydrodenitrogenation is a process used to remove nitrogen impurities from refinery streams; it works by reacting a given charge with hydrogen at high temperature and pressure in a trickle-bed reactor. To optimize the operation of such reactors, one needs information about the kinetics of the various hydrodenitrogenation reactions. However, reaction rate expressions are not available in the open literature. Therefore, this work aims at obtaining the reaction rate expressions and kinetic parameters for the reaction network of nitrogen compounds, using the rigorous hydrodenitrogenation model in Aspen Hysys as the numerical basis for simulations. A differential reactor was simulated to generate concentration data for reagents and products at different feed loads. Three different methods were used: a multivariable linear regression model to obtain the regression coefficients, a stochastic interpolating metamodel (Kriging), and a Kriging metamodel optimized with the least squares method. As a first step, the methodologies were tested on two simple reaction rates, one reversible and one irreversible, in a PFR reactor in Hysys. The results showed that linear regression can be used to estimate kinetic parameters satisfactorily only if the corresponding rate expression is known. Using statistical analyses such as the coefficient of determination R² and analysis of variance (ANOVA), it was possible to compare the two Kriging variants (conventional and optimized). The optimized Kriging showed better adherence to the data than conventional Kriging.
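A Kriging metamodel is, in its simplest form, a Gaussian-process interpolator. The sketch below implements simple kriging with an RBF kernel on invented one-dimensional data; it stands in for, but does not reproduce, the Aspen Hysys workflow or the thesis's optimized variant.

```python
import numpy as np

def kriging_fit_predict(X, y, Xnew, length=0.3, noise=1e-6):
    """Minimal Gaussian-process (simple-kriging) interpolator with an RBF
    kernel; a stand-in for a Kriging metamodel, not the thesis's code."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / length ** 2)
    K = k(X, X) + noise * np.eye(len(X))   # jitter for numerical stability
    alpha = np.linalg.solve(K, y)
    return k(Xnew, X) @ alpha

# Invented smooth nonlinear response on a 1-D grid (not HDN kinetics data)
X = np.linspace(0.0, 1.0, 8)[:, None]
y = 0.1 * np.exp(2 * X[:, 0])
Xnew = np.array([[0.35]])
pred = kriging_fit_predict(X, y, Xnew)
truth = 0.1 * np.exp(2 * 0.35)
print(float(pred[0]), truth)   # the interpolation lands close to the truth
```

Unlike a linear regression, the interpolator needs no prior knowledge of the rate-expression form, which is why Kriging is attractive when the true rate equations are unknown.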
117
Improved use of abattoir information to aid the management of liver fluke in cattle. Mazeri, Stella. January 2017.
Fasciolosis, caused by the trematode parasite Fasciola hepatica, is a multi-host parasitic disease affecting many countries worldwide. It is a well-recognized clinically and economically important disease of food producing animals such as cattle and sheep. In the UK, the incidence and distribution of fasciolosis has been increasing in the last decade while the timing of acute disease is becoming more variable and the season suitable for parasite development outside the mammalian host has been extended. Meanwhile control is proving increasingly difficult due to changing weather conditions, increased animal movements and developing anthelmintic resistance. Forecasting models have been around for a long time to aid health planning related to fasciolosis control, but studies identifying management related risk factors are limited. Moreover, the lack of information on the accuracy of meat inspection and available liver fluke diagnostic tests hinders effective monitoring of disease prevalence and treatment. So far, the evaluation of tests available for the diagnosis of the infection in cattle has mainly been carried out using gold standard approaches or under experimental settings, the limitations of which are well known. In cattle, the infection mainly manifests as a sub-clinical disease, resulting in indirect production losses, which are difficult to estimate. The lack of obvious clinical signs results in these losses commonly being attributed to other causes such as poor weather conditions or bad quality forage. This further undermines establishment of appropriate control strategies, as it is difficult to convince farmers to treat without demonstrating clear economic losses of sub-clinical disease. 
This project explores the value of slaughterhouse data in understanding the changing epidemiology of fasciolosis, identifying sustainable control measures and estimating the effect of infection on production parameters, using data collected at one of the largest cattle and sheep abattoirs in Scotland. Data used in this study include: (a) abattoir data routinely collected during 2013 and 2014, (b) data collected during 3 periods of abattoir-based sampling, (c) data collected through administration of a management questionnaire and (d) climatic and environmental data from various online sources. A Bayesian extension of the Hui-Walter no-gold-standard model was used to estimate the diagnostic sensitivity and specificity of five diagnostic tests for fasciolosis in cattle, which were applied on 619 samples collected from the abattoir during three sampling periods: summer 2013, winter 2014 and autumn 2014. The results provided novel information on the performance of these tests in a naturally infected cattle population at different times of the year. Meat inspection was estimated to have a sensitivity of 0.68 (95% BCI 0.61-0.75) and a specificity of 0.88 (95% BCI 0.85-0.91). Accurate estimates of sensitivity and specificity will allow routine abattoir liver inspection to be used as a tool for monitoring the epidemiology of F. hepatica as well as evaluating herd health planning. Linear regression modelling was used to estimate the delay in reaching slaughter weight in beef cattle infected with F. hepatica, accounting for other important factors such as weight, age, sex, breed and farm as a random effect. The model estimated that cattle classified as having fluke based on routine liver inspection had on average 10 (95% CI 9-12) days greater slaughter age, assuming an average carcass weight of 345 kg. Furthermore, estimates from a second model indicated that the increase in age at slaughter was more severe for higher fibrosis scores.
More precisely, the increase in slaughter age was 34 (95% CI 11-57) days for fibrosis score of 1, 93 (95% CI 57-128) days for fibrosis score 2 and 78 (95% CI 30-125) days for fibrosis score 3. Similarly, in a third model comparing different burden categories with animals with no fluke burden, there was a 31 (95% CI 7-56) days increase in slaughter age for animals with 1 to 10 parasites and 77 (95% CI 32-124) days increase in animals with more than 10 parasites found in their livers. Lastly, a multi-variable mixed effects logistic regression model was built to estimate the association between climate, environmental, management and animal specific factors and the risk of an animal being infected by F. hepatica. Multiple imputation methodology was employed to deal with missing data arising from skipped questions in the questionnaire. Results of the regression model confirmed the importance of temperature, rainfall and cattle movements in increasing the risk for fasciolosis, while it indicated that the presence of deer can increase the risk of infection and that male cattle have a reduced risk of infection. Overall, this project has used slaughterhouse data to fill important knowledge gaps regarding F. hepatica infection in cattle. It has provided valuable information on the accuracy of routine abattoir meat inspection, as well as other diagnostic tests. It has also provided estimates of the effect of infection on the time cattle take to reach slaughter weight at different levels of infection and identified relevant risk factors related to the infection. In conclusion, knowledge of the effect of infection on slaughter age, as well as regional risk factors for F. hepatica infection, along with an improved use of abattoir inspection results in the evaluation of treatment strategies, can provide farmers and veterinarians with better incentives and tools to improve their herd health strategies and in the longer term help reduce the incidence of liver fluke in cattle.
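Given sensitivity and specificity estimates like those reported for meat inspection, the apparent prevalence from routine inspection can be corrected with the standard Rogan-Gladen formula. The herd prevalence below is hypothetical; the sensitivity and specificity are the point estimates quoted above, used here only to illustrate the size of the bias an imperfect test introduces.

```python
# Rogan-Gladen correction: recover true prevalence from apparent prevalence
# given test sensitivity and specificity (standard formula, not thesis code).
def rogan_gladen(apparent, se, sp):
    return (apparent + sp - 1.0) / (se + sp - 1.0)

se, sp = 0.68, 0.88      # point estimates reported for meat inspection
true_prev = 0.30         # hypothetical herd-level prevalence
apparent = se * true_prev + (1 - sp) * (1 - true_prev)
print(apparent)                        # ~0.288: what inspection alone reports
print(rogan_gladen(apparent, se, sp))  # recovers ~0.30
```

This is exactly why accurate sensitivity and specificity estimates matter: without them, routine inspection data cannot be translated into unbiased prevalence estimates for monitoring.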
118
Establishing predictive validity for oral passage reading fluency and vocabulary curriculum-based measures (CBMs) for sixth grade students. Megert, Brian R. 06 1900.
In recent years, state and national policy created the need for higher accountability standards for student academic performance. This increased accountability creates an imperative to have a formative assessment system reflecting validity in inferences about the effectiveness of instruction and performance on statewide large-scale assessments. Curriculum-based measurement (CBM) satisfies both functions. However, research shows the predictive power of oral passage reading fluency (PRF) diminishes in middle and high school. Because of the decreased predictive validity of PRF in the upper grade levels, additional reading CBMs should be explored. This study compares PRF and Vocabulary CBM data for all sixth grade students in a school district using two statistical procedures: correlation and regression. The correlation coefficients were moderately high among PRF, Vocabulary CBM, and the Reading test in Oregon Assessment of Knowledge and Skills (OAKS). A regression analysis indicated that the Vocabulary CBM explained more variance than PRF in predicting reading performance on OAKS. A second multiple regression analysis introduced three non-performance indicators (Gender, Attendance, and NCLB At-Risk), along with the two CBMs (Vocabulary and PRF). The second regression results revealed that Vocabulary again was more predictive than PRF, Gender, Attendance, or NCLB At-Risk. At-Risk status was the only non-performance indicator that was significant. All the findings have been discussed within the context of understanding reading skills using CBMs and their relation to performance on a large-scale test used for accountability. The findings have been framed as part of an information system that allows schools and districts to better tailor staffing, instruction, and schedules to student needs.
Suggestions for future research also have been discussed, particularly in enhancing the predictions on large-scale test outcomes using a variety of CBMs. / Committee in charge: Gerald Tindal, Chairperson, Educational Methodology, Policy, and Leadership;
Paul Yovanoff, Member, Educational Methodology, Policy, and Leadership;
Keith Hollenbeck, Member, Educational Methodology, Policy, and Leadership;
Jean Stockard, Outside Member, Planning Public Policy & Mgmt
119
Detecção e diagnóstico de falhas baseado em modelos empíricos no subespaço das variáveis de processo (EMPVSUB) [Fault detection and diagnosis based on empirical models in the process-variable subspace]. Bastidas, Maria Eugenia Hidalgo. January 2018.
O escopo desta dissertação é o desenvolvimento de uma metodologia para a detecção e diagnóstico de falhas em processos industriais baseado em modelos empíricos no subespaço das variáveis do processo com expansão não linear das bases. A detecção e o diagnóstico de falhas são fundamentais para aumentar a segurança, confiabilidade e lucratividade de processos industriais. Métodos qualitativos, quantitativos e baseados em dados históricos do processo têm sido estudados amplamente. Para demonstrar as vantagens da metodologia proposta, ela será comparada com duas metodologias consideradas padrão, uma baseada em Análise de Componentes Principais (PCA) e a outra baseada em Mínimos Quadrados Parciais (PLS). Dois estudos de casos são empregados nessa comparação. O primeiro consiste em um tanque de aquecimento com mistura e o segundo contempla o estudo de caso do processo da Tennessee Eastman. As vantagens da metodologia proposta consistem na redução da dimensionalidade dos dados a serem usados para um diagnóstico adequado, além de detectar efetivamente a anormalidade e identificar as variáveis mais relacionadas à falha, permitindo um melhor diagnóstico. Além disso, devido à expansão das bases dos modelos é possível trabalhar efetivamente com sistemas não lineares, através de funções polinomiais e exponenciais dentro do modelo. Adicionalmente o trabalho contém uma metodologia de validação dos resultados da metodologia proposta, que consiste na eliminação das variáveis do melhor modelo obtido pelos Modelos Empíricos, através do método Backward Elimination. A metodologia proposta forneceu bons resultados na área do diagnóstico de falhas: conseguiu-se uma grande diminuição da dimensionalidade nos sistemas estudados em até 93,55%, bem como uma correta detecção de anormalidades e permitiu a determinação das variáveis mais relacionadas às anormalidades do processo. 
The comparisons with the standard methodologies showed that the proposed methodology yields superior results, since it detects abnormalities in a reduced-dimensional space, capturing nonlinear behaviors and reducing uncertainties. / Fault detection and diagnosis are critical to increasing the safety, reliability, and profitability of industrial processes. Qualitative and quantitative methods and process historical data have been extensively studied. This work proposes a methodology for fault detection and diagnosis, based on historical process data and the creation of empirical models with the expansion of nonlinear bases (polynomial and exponential bases) and regularization techniques. To demonstrate the advantages of the proposed approach, it is compared with two standard methodologies, Principal Component Analysis (PCA) and Partial Least Squares (PLS), in two case studies: a stirred heating tank and the Tennessee Eastman Process. The advantages of the proposed methodology are the reduction of the dimensionality of the data used, in addition to the effective detection of abnormalities, identifying the variables most related to the fault. Furthermore, the work contains a methodology to validate the diagnosis results, consisting of variable elimination from the best empirical models with the Backward Elimination algorithm. The proposed methodology achieved a promising performance, since it can decrease the dimensionality of the studied systems by up to 93.55%, reducing uncertainties and capturing nonlinear behaviors.
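The core idea of the abstract above, fitting an empirical model over a nonlinearly expanded basis and then pruning it with Backward Elimination, can be sketched as follows. The process variables, the candidate basis terms, and the 5% stopping tolerance are illustrative assumptions for this sketch, not the dissertation's actual data, basis set, or criterion:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400

# Hypothetical process variables: the "true" response depends nonlinearly
# on x1 and x2 only; x3 is irrelevant and should be eliminated.
x1, x2, x3 = rng.normal(size=(3, n))
y = 2.0 * x1 + 0.8 * x1 ** 2 + 1.5 * np.exp(0.5 * x2) + 0.1 * rng.normal(size=n)

# Candidate basis: linear terms plus polynomial and exponential expansions
basis = {
    "x1": x1, "x2": x2, "x3": x3,
    "x1^2": x1 ** 2, "x2^2": x2 ** 2, "x3^2": x3 ** 2,
    "exp(0.5*x2)": np.exp(0.5 * x2), "exp(0.5*x3)": np.exp(0.5 * x3),
}

def fit_sse(terms):
    """Least-squares fit on the selected basis terms; return the SSE."""
    X = np.column_stack([np.ones(n)] + [basis[k] for k in terms])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return r @ r

# Backward Elimination: repeatedly drop the term whose removal increases the
# SSE the least, as long as the relative increase stays below a tolerance.
names = list(basis)
while len(names) > 1:
    sse_full = fit_sse(names)
    sse_red, worst = min((fit_sse([k for k in names if k != c]), c) for c in names)
    if (sse_red - sse_full) / sse_full > 0.05:  # removal hurts too much: stop
        break
    names.remove(worst)

print(sorted(names))  # only the terms that actually drive y should survive
```

With these simulated data the procedure discards the irrelevant and redundant terms and retains the three terms that generate the response, illustrating how basis expansion lets a linear estimator capture polynomial and exponential behavior while elimination keeps the final model small.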
|
120 |
Quantitative Research on the Return of Private Seasoned Equity Offerings: Evidence from China. January 2017 (has links)
abstract: This paper quantitatively analyzes the relation between the return of private
seasoned equity offerings and variables of market and firm characteristics in the China
A-share market. A multiple-factor linear regression model is constructed to estimate this
relation, and the result can help investors determine the future return of private
placement stocks.
In this paper, I first review past theories about private placement stocks, including how
large shareholder participation, the discount of private offerings, firm
characteristics, and the firm's intrinsic value affect the return of private
offerings.
Based on the past literature, I propose four main factors that may affect the
return of private placements: large shareholder participation in the private
placement; the discount the private placement offers; the characteristics of the
company making the offering; and the intrinsic value of that company. I
use statistical and correlation analysis to test the impact of each factor. Then,
building on this single-factor analysis, I set up a multiple-factor linear regression model
of private seasoned equity offering returns in Chapter Four.
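A multiple-factor linear regression of this kind can be sketched with ordinary least squares. The four regressors mirror the factors listed above, but the data, variable names, and coefficients here are simulated assumptions for illustration, not the paper's actual sample or estimates:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300  # hypothetical sample of private placements

# Illustrative single-factor variables (names are assumptions, not the paper's data):
participation = rng.integers(0, 2, n).astype(float)  # large-shareholder participation (0/1)
discount = rng.uniform(0.0, 0.3, n)                  # offering discount to market price
size = rng.normal(22.0, 1.5, n)                      # firm characteristic: log market cap
value = rng.uniform(0.2, 1.5, n)                     # intrinsic-value proxy (book-to-market)

# Simulated post-placement return generated with assumed coefficients
ret = (0.05 + 0.04 * participation + 0.5 * discount
       - 0.01 * size + 0.03 * value + rng.normal(0.0, 0.05, n))

# Multiple-factor OLS: ret = b0 + b1*participation + b2*discount + b3*size + b4*value
X = np.column_stack([np.ones(n), participation, discount, size, value])
beta, *_ = np.linalg.lstsq(X, ret, rcond=None)
print(np.round(beta, 3))  # estimates recover the assumed coefficients up to noise
```

Given a fitted `beta`, the expected return of a new placement is simply the dot product of its factor vector with the coefficients, which is what makes the model usable as a screening tool for investors.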
In the last two chapters, I apply this quantitative model to other areas: I use the
model to evaluate existing private placement financial products and to develop investment
strategies for stocks with private seasoned equity offerings in the secondary market. Backtesting
shows that my quantitative strategy is effective. / Dissertation/Thesis / Doctoral Dissertation Business Administration 2017
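A minimal equal-weight backtest of such a secondary-market strategy might look like the sketch below. The return series, drift, and portfolio size are simulated for illustration only and do not reproduce the paper's data or results:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical daily returns: 20 stocks with recent private placements over
# 250 trading days, versus a benchmark index (all numbers are assumptions).
stock_rets = rng.normal(0.0008, 0.02, size=(250, 20))
bench_rets = rng.normal(0.0003, 0.015, size=250)

# Strategy: hold an equal-weight portfolio of the placement stocks
port_rets = stock_rets.mean(axis=1)

# Cumulative performance and simple backtest metrics
port_curve = np.cumprod(1 + port_rets)
bench_curve = np.cumprod(1 + bench_rets)
excess = port_rets - bench_rets
ann_excess = excess.mean() * 250                       # annualized excess return
info_ratio = excess.mean() / excess.std() * np.sqrt(250)  # information ratio

print(f"portfolio {port_curve[-1]:.3f}x, benchmark {bench_curve[-1]:.3f}x, "
      f"IR {info_ratio:.2f}")
```

In practice the entry signal would come from the regression model's predicted return rather than a fixed universe, and the comparison against the benchmark curve is what the dissertation's backtest is evaluating.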