121

Exact Markov chain Monte Carlo and Bayesian linear regression

Bentley, Jason Phillip January 2009 (has links)
In this work we investigate the use of perfect sampling methods within the context of Bayesian linear regression. We focus on inference problems related to the marginal posterior model probabilities. Model averaged inference for the response and Bayesian variable selection are considered. Perfect sampling is an alternate form of Markov chain Monte Carlo that generates exact sample points from the posterior of interest. This approach removes the need for burn-in assessment faced by traditional MCMC methods. For model averaged inference, we find the monotone Gibbs coupling from the past (CFTP) algorithm is the preferred choice. This requires the predictor matrix be orthogonal, preventing variable selection, but allowing model averaging for prediction of the response. Exploring choices of priors for the parameters in the Bayesian linear model, we investigate sufficiency for monotonicity assuming Gaussian errors. We discover that a number of other sufficient conditions exist, besides an orthogonal predictor matrix, for the construction of a monotone Gibbs Markov chain. Requiring an orthogonal predictor matrix, we investigate new methods of orthogonalizing the original predictor matrix. We find that a new method using the modified Gram-Schmidt orthogonalization procedure performs comparably with existing transformation methods, such as generalized principal components. Accounting for the effect of using an orthogonal predictor matrix, we discover that inference using model averaging for in-sample prediction of the response is comparable between the original and orthogonal predictor matrix. The Gibbs sampler is then investigated for sampling when using the original predictor matrix and the orthogonal predictor matrix. We find that a hybrid method, using a standard Gibbs sampler on the orthogonal space in conjunction with the monotone CFTP Gibbs sampler, provides the fastest computation and convergence to the posterior distribution. We conclude the hybrid approach should be used when the monotone Gibbs CFTP sampler becomes impractical, due to large backwards coupling times. We demonstrate large backwards coupling times occur when the sample size is close to the number of predictors, or when hyper-parameter choices increase model competition. The monotone Gibbs CFTP sampler should be taken advantage of when the backwards coupling time is small. For the problem of variable selection we turn to the exact version of the independent Metropolis-Hastings (IMH) algorithm. We reiterate the notion that the exact IMH sampler is redundant, being a needlessly complicated rejection sampler. We then determine a rejection sampler is feasible for variable selection when the sample size is close to the number of predictors and using Zellner’s prior with a small value for the hyper-parameter c. Finally, we use the example of simulating from the posterior of c conditional on a model to demonstrate how the use of an exact IMH view-point clarifies how the rejection sampler can be adapted to improve efficiency.
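The monotone coupling-from-the-past idea at the heart of this thesis can be sketched compactly. Below is a minimal Python illustration of monotone CFTP on a toy chain (a reflecting random walk); the update rule, state space, and doubling schedule are illustrative assumptions, not the thesis's Bayesian regression sampler:

```python
import numpy as np

def monotone_cftp(update, x_min, x_max, rng, max_doublings=20):
    """Coupling from the past for a monotone update rule.

    update(x, u) gives the next state from state x and uniform draw u,
    and must be monotone in x for fixed u. Returns an exact draw from
    the chain's stationary distribution."""
    u = []   # u[k] drives the step at time -(k+1); reused across doublings
    T = 1
    for _ in range(max_doublings):
        while len(u) < T:
            u.append(rng.uniform())
        lo, hi = x_min, x_max
        # Run both bounding chains from time -T to time 0 with shared randomness.
        for t in range(T, 0, -1):
            lo = update(lo, u[t - 1])
            hi = update(hi, u[t - 1])
        if lo == hi:          # chains coalesced: exact stationary sample
            return lo
        T *= 2                # go further into the past and retry
    raise RuntimeError("no coalescence; backward coupling time too large")

# Example: reflecting random walk on {0,...,10} with upward probability 0.3.
def walk(x, u, K=10, p=0.3):
    return min(K, x + 1) if u < p else max(0, x - 1)

rng = np.random.default_rng(1)
samples = [monotone_cftp(walk, 0, 10, rng) for _ in range(1000)]
```

The two essential CFTP ingredients are visible here: the random input for a given time index is reused as the start time moves further into the past, and coalescence of the bounding chains at time 0 certifies an exact stationary draw, which is why no burn-in assessment is needed.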
122

Ανάλυση διασποράς και παλινδρόμησης με εφαρμογές / Analysis of variance and regression with applications

Καμπέλη, Πετρούλα 20 September 2010 (has links)
This thesis describes and develops two statistical methods for data analysis: linear regression with qualitative variables, and analysis of variance with one and then with two factors. The methods are then applied to real data from water samples taken from a coastal inlet, studying the degree of influence of three different rainfall events on the pH of the water. The methods are applied using the SPSS statistical package.
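For readers unfamiliar with the workflow, a one-way ANOVA of the kind described takes a few lines; the sketch below uses Python's statsmodels rather than SPSS, with made-up pH values standing in for the inlet samples:

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical data: water pH measured after three rainfall events A, B, C.
df = pd.DataFrame({
    "ph":   [6.8, 6.9, 7.1, 7.4, 7.3, 7.5, 6.5, 6.6, 6.4],
    "rain": ["A", "A", "A", "B", "B", "B", "C", "C", "C"],
})

# One-way ANOVA: does mean pH differ across the three rainfall events?
model = smf.ols("ph ~ C(rain)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```

A second factor (e.g., sampling site) would extend the formula to `ph ~ C(rain) * C(site)` for the two-way analysis.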
123

A complex networks approach to designing resilient system-of-systems

Tran, Huy T. 07 January 2016 (has links)
This thesis develops a methodology for designing resilient system-of-systems (SoS) networks. This methodology includes a capability-based resilience assessment framework, used to quantify SoS resilience. A complex networks approach is used to generate potential SoS network designs, focusing on scale-free and random network topologies, degree-based and random rewiring adaptation, and targeted and random node removal threats. Statistical design methods, specifically response surface methodology, are used to evaluate SoS networks and provide an understanding of the advantages and disadvantages of potential designs. Linear regression is used to model a continuous representation of the network design space, and determine optimally resilient networks for particular threat types. The methodology is applied to an information exchange (IE) network model (i.e., a message passing network model) and military command and control (C2) model. Results show that optimally resilient IE network topologies are random for networks with adaptation, regardless of the threat type. However, the optimally resilient adaptation method sharply transitions from being fully random to fully degree-based as threat randomness increases. These findings suggest that intermediately defined networks should not be considered when designing for resilience. Cost-benefit analysis of C2 networks suggests that resilient C2 networks are more cost-effective than robust ones, as long as the cost of rewiring network links is less than three-fourths the cost of creating new links. This result identifies a threshold for which a resilient network design approach is more cost-effective than a robust one.
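The topology-versus-threat comparison described above can be prototyped quickly with complex-network tooling. The following sketch (using networkx, with an assumed largest-connected-component capability metric standing in for the thesis's capability-based resilience framework) contrasts scale-free and random topologies under targeted and random node removal:

```python
import random
import networkx as nx

def largest_cc_fraction(G, n0):
    """Capability proxy (an assumed metric, not the thesis framework):
    fraction of the original nodes still in the largest connected component."""
    if G.number_of_nodes() == 0:
        return 0.0
    return len(max(nx.connected_components(G), key=len)) / n0

def attack(G, k, targeted, seed=0):
    """Remove k nodes, either highest-degree first (targeted) or at random."""
    H = G.copy()
    if targeted:
        victims = [v for v, _ in sorted(H.degree, key=lambda kv: kv[1],
                                        reverse=True)[:k]]
    else:
        victims = random.Random(seed).sample(list(H.nodes), k)
    H.remove_nodes_from(victims)
    return H

n = 100
scale_free = nx.barabasi_albert_graph(n, 2, seed=0)
random_net = nx.gnm_random_graph(n, scale_free.number_of_edges(), seed=0)

for name, G in [("scale-free", scale_free), ("random", random_net)]:
    for targeted in (True, False):
        frac = largest_cc_fraction(attack(G, 10, targeted), n)
        print(f"{name:10s} {'targeted' if targeted else 'random':8s} {frac:.2f}")
```

Running this reproduces the qualitative effect the thesis exploits: scale-free networks degrade sharply under targeted hub removal but tolerate random removal well, while random networks respond similarly to both.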
124

Régression linéaire bayésienne sur données fonctionnelles / Functional Bayesian linear regression

Grollemund, Paul-Marie 22 November 2017 (has links)
The linear regression model is a common tool for the statistician. When a covariate is a curve, we face a high-dimensional problem. In this case, sparse models lead to successful inference, for instance by expanding the functional covariate on a lower-dimensional space. In this thesis, we propose a Bayesian approach, named Bliss, to fit the functional linear regression model. The Bliss model assumes, through the prior, that the coefficient function is a step function. From the posterior, we propose several estimators to be used depending on the context: an estimator of the support, and two estimators of the coefficient function, a smooth one and a piecewise constant one. To illustrate this, we explain the black Périgord truffle yield with rainfall during the truffle life cycle; the Bliss method succeeds in selecting two periods relevant to truffle development. As another feature of the Bayesian paradigm, the prior distribution enables the integration of preliminary judgments into the statistical inference. For instance, biologists' knowledge about truffle growth is relevant for informing the Bliss model. To this end, we propose two modifications of the Bliss model that take such judgments into account. First, we indirectly collect preliminary judgments using pseudo-data provided by experts; the proposed prior distribution corresponds to the posterior distribution given the experts' pseudo-data, and the effect of each expert and their correlations are controlled with weights. Second, we explicitly collect experts' judgments about the periods most influential on truffle yield and whether their effect is positive or negative; the proposed prior distribution relies on a penalization of coefficient functions that contradict these judgments. Lastly, the asymptotic behavior of the Bliss method is studied. We validate the proposed approach by showing the posterior consistency of the Bliss model. Using model-specific assumptions, an efficient proof of a Wald theorem is given. The main difficulty is the misspecification of the model, since the true coefficient function is surely not a step function. We show that the posterior distribution contracts on a step function which is the Kullback-Leibler projection of the true coefficient function onto a set of step functions. This step function is derived from the true parameter and the design.
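To make the step-function idea concrete, here is a minimal sketch (not the Bliss sampler itself) of how a step-function coefficient turns functional linear regression into ordinary linear regression. The grid, intervals, and least-squares fit are illustrative assumptions; Bliss instead places a prior over such step functions and works with the posterior:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 100)          # common observation grid for the curves
dt = t[1] - t[0]
n = 200
X = np.cumsum(rng.normal(size=(n, t.size)), axis=1) * np.sqrt(dt)  # rough curves

# True coefficient: a step function with two influential periods.
beta = (np.where((t >= 0.2) & (t < 0.35), 1.5, 0.0)
        + np.where((t >= 0.6) & (t < 0.7), -1.0, 0.0))
y = X @ beta * dt + rng.normal(scale=0.1, size=n)   # y_i = ∫ x_i(t) β(t) dt + ε_i

# Expand the functional covariate on K interval indicators (a step basis):
# the functional model then becomes an ordinary linear regression.
K = 10
edges = np.linspace(0.0, 1.0, K + 1)
Z = np.column_stack([X[:, (t >= a) & (t < b)].sum(axis=1) * dt
                     for a, b in zip(edges[:-1], edges[1:])])
heights, *_ = np.linalg.lstsq(Z, y, rcond=None)     # β's height on each interval
print(np.round(heights, 2))
```

The recovered interval heights are large only on the two "influential periods", which is the kind of support information the Bliss support estimator formalizes.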
125

Estudo da cinética das reações de hidrodesnitrogenação / Study of the kinetics of hydrodenitrogenation reactions

FERNANDES, Thalita Cristine Ribeiro Lucas. 16 October 2018 (has links)
Catalytic hydrodenitrogenation is a process used to remove nitrogen impurities from petroleum-derived streams; it treats the feed with hydrogen at high temperature and pressure in a trickle-bed reactor. To optimize the operation of such reactors, information about the kinetics of the various hydrodenitrogenation reactions is needed. However, the reaction rate expressions are not available in the open literature. This work therefore aims to obtain the reaction rate expressions and kinetic parameters for the reaction network of nitrogen compounds, using the rigorous hydrodenitrogenation model in Aspen Hysys as the numerical basis for the simulations. Numerical experiments were carried out in a differential reactor in Aspen Hysys to obtain concentration data for reagents and products at different feeds. Three methods were used: multivariable linear regression to obtain the regression coefficients; Kriging, a stochastic interpolating metamodel; and a Kriging metamodel optimized by least squares. To test the proposed methodologies, all steps were first applied to a system of two simple reactions, one reversible and one irreversible, in a PFR reactor. The linear regression results showed that the methodology can estimate kinetic parameters satisfactorily provided that the corresponding rate expression is known. The two proposed Kriging variants (conventional and optimized) were compared using statistical analyses such as the coefficient of determination R² and analysis of variance (ANOVA); the optimized Kriging showed better adherence to the data than the conventional Kriging.
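As a rough illustration of why a Kriging metamodel is attractive here, the sketch below fits a Gaussian-process (Kriging) surrogate and a plain linear regression to simulated rate data; the assumed kinetics, variable ranges, and the use of scikit-learn (rather than Aspen Hysys) are all illustrative assumptions:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel
from sklearn.linear_model import LinearRegression

# Hypothetical reactor data: feed concentration and temperature -> reaction rate.
rng = np.random.default_rng(0)
X = rng.uniform([0.1, 300.0], [2.0, 400.0], size=(40, 2))   # [conc, T in K]
rate = 0.5 * X[:, 0] * np.exp(-2000.0 / X[:, 1])            # assumed true kinetics
y = rate + rng.normal(scale=0.002, size=40)

# Kriging: a GP with an RBF kernel interpolates the simulated responses
# without requiring the functional form of the rate law.
gp = GaussianProcessRegressor(
    kernel=ConstantKernel(1.0) * RBF(length_scale=[1.0, 50.0]),
    normalize_y=True).fit(X, y)

# Linear regression, by contrast, needs the right regressors up front.
lin = LinearRegression().fit(X, y)
print("GP R^2:    ", gp.score(X, y))
print("linear R^2:", lin.score(X, y))
```

This mirrors the dissertation's finding: regression works well when the rate expression is known, while the Kriging metamodel adheres to the data without that knowledge.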
126

Improved use of abattoir information to aid the management of liver fluke in cattle

Mazeri, Stella January 2017 (has links)
Fasciolosis, caused by the trematode parasite Fasciola hepatica, is a multi-host parasitic disease affecting many countries worldwide. It is a well-recognized clinically and economically important disease of food-producing animals such as cattle and sheep. In the UK, the incidence and distribution of fasciolosis have been increasing in the last decade, while the timing of acute disease is becoming more variable and the season suitable for parasite development outside the mammalian host has been extended. Meanwhile, control is proving increasingly difficult due to changing weather conditions, increased animal movements and developing anthelmintic resistance. Forecasting models have long been available to aid health planning related to fasciolosis control, but studies identifying management-related risk factors are limited. Moreover, the lack of information on the accuracy of meat inspection and available liver fluke diagnostic tests hinders effective monitoring of disease prevalence and treatment. So far, the evaluation of tests available for diagnosing the infection in cattle has mainly been carried out using gold-standard approaches or under experimental settings, the limitations of which are well known. In cattle, the infection mainly manifests as a sub-clinical disease, resulting in indirect production losses, which are difficult to estimate. The lack of obvious clinical signs means these losses are commonly attributed to other causes, such as poor weather conditions or bad-quality forage. This further undermines the establishment of appropriate control strategies, as it is difficult to convince farmers to treat without demonstrating clear economic losses from sub-clinical disease. This project explores the value of slaughterhouse data in understanding the changing epidemiology of fasciolosis, identifying sustainable control measures and estimating the effect of infection on production parameters, using data collected at one of the largest cattle and sheep abattoirs in Scotland. Data used in this study include: a) abattoir data routinely collected during 2013 and 2014; b) data collected during three periods of abattoir-based sampling; c) data collected through a management questionnaire; and d) climatic and environmental data from various online sources. A Bayesian extension of the Hui-Walter no-gold-standard model was used to estimate the diagnostic sensitivity and specificity of five diagnostic tests for fasciolosis in cattle, applied to 619 samples collected from the abattoir during three sampling periods: summer 2013, winter 2014 and autumn 2014. The results provided novel information on the performance of these tests in a naturally infected cattle population at different times of the year. Meat inspection was estimated to have a sensitivity of 0.68 (95% BCI 0.61-0.75) and a specificity of 0.88 (95% BCI 0.85-0.91). Accurate estimates of sensitivity and specificity will allow routine abattoir liver inspection to be used as a tool for monitoring the epidemiology of F. hepatica as well as for evaluating herd health planning. Linear regression modelling was used to estimate the delay in reaching slaughter weight in beef cattle infected with F. hepatica, accounting for other important factors such as weight, age, sex and breed, with farm as a random effect. The model estimated that cattle classified as having fluke based on routine liver inspection had on average a 10-day (95% CI 9-12) greater slaughter age, assuming an average carcass weight of 345 kg.
Furthermore, estimates from a second model indicated that the increase in age at slaughter was more severe for higher fibrosis scores. More precisely, the increase in slaughter age was 34 days (95% CI 11-57) for a fibrosis score of 1, 93 days (95% CI 57-128) for a score of 2 and 78 days (95% CI 30-125) for a score of 3. Similarly, in a third model comparing different burden categories with animals with no fluke burden, there was a 31-day (95% CI 7-56) increase in slaughter age for animals with 1 to 10 parasites and a 77-day (95% CI 32-124) increase for animals with more than 10 parasites found in their livers. Lastly, a multi-variable mixed-effects logistic regression model was built to estimate the association between climatic, environmental, management and animal-specific factors and the risk of an animal being infected by F. hepatica. Multiple imputation was employed to deal with missing data arising from skipped questions in the questionnaire. The regression results confirmed the importance of temperature, rainfall and cattle movements in increasing the risk of fasciolosis, and indicated that the presence of deer can increase the risk of infection and that male cattle have a reduced risk of infection. Overall, this project has used slaughterhouse data to fill important knowledge gaps regarding F. hepatica infection in cattle. It has provided valuable information on the accuracy of routine abattoir meat inspection, as well as of other diagnostic tests. It has also provided estimates of the effect of infection on the time cattle take to reach slaughter weight at different levels of infection, and identified relevant risk factors for infection. In conclusion, knowledge of the effect of infection on slaughter age and of regional risk factors for F. hepatica infection, along with improved use of abattoir inspection results in the evaluation of treatment strategies, can provide farmers and veterinarians with better incentives and tools to improve their herd health strategies and, in the longer term, help reduce the incidence of liver fluke in cattle.
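A linear model with farm as a random effect, as used for the slaughter-age analysis, can be expressed along the following lines; the data, effect sizes, and the statsmodels implementation are illustrative assumptions, not the study's actual dataset:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical abattoir records: does fluke status delay slaughter age?
rng = np.random.default_rng(0)
n, n_farms = 500, 40
df = pd.DataFrame({
    "farm":   rng.integers(0, n_farms, n),
    "fluke":  rng.integers(0, 2, n),               # liver-inspection result
    "weight": rng.normal(345, 30, n),              # carcass weight, kg
    "sex":    rng.choice(["M", "F"], n),
})
farm_effect = rng.normal(0, 20, n_farms)           # unobserved farm-level shift
df["age_days"] = (700 + 10 * df["fluke"] + 0.5 * (df["weight"] - 345)
                  + farm_effect[df["farm"]] + rng.normal(0, 30, n))

# Linear mixed model: fixed effects for fluke, weight and sex,
# farm as a random intercept.
m = smf.mixedlm("age_days ~ fluke + weight + sex", df, groups=df["farm"]).fit()
print(m.summary())
```

The coefficient on `fluke` plays the role of the 10-day delay estimate reported above, with the farm random intercept absorbing between-farm differences.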
127

Establishing predictive validity for oral passage reading fluency and vocabulary curriculum-based measures (CBMs) for sixth grade students

Megert, Brian R. 06 1900 (has links)
In recent years, state and national policy created the need for higher accountability standards for student academic performance. This increased accountability creates an imperative for a formative assessment system that supports valid inferences about the effectiveness of instruction and about performance on statewide large-scale assessments. Curriculum-based measurement (CBM) satisfies both functions. However, research shows the predictive power of oral passage reading fluency (PRF) diminishes in middle and high school. Because of the decreased predictive validity of PRF in the upper grade levels, additional reading CBMs should be explored. This study compares PRF and Vocabulary CBM data for all sixth grade students in a school district using two statistical procedures: correlation and regression. The correlation coefficients were moderately high among PRF, Vocabulary CBM, and the Reading test in the Oregon Assessment of Knowledge and Skills (OAKS). A regression analysis indicated that the Vocabulary CBM explained more variance than PRF in predicting reading performance on OAKS. A second multiple regression analysis introduced three non-performance indicators (Gender, Attendance, and NCLB At-Risk) along with the two CBMs (Vocabulary and PRF). The second regression revealed that Vocabulary again was more predictive than PRF, Gender, Attendance, or NCLB At-Risk; At-Risk status was the only significant non-performance indicator. The findings are discussed within the context of understanding reading skills using CBMs and their relation to performance on a large-scale test used for accountability, and are framed as part of an information system that allows schools and districts to better tailor staffing, instruction, and schedules to student needs. Suggestions for future research are also discussed, particularly for enhancing predictions of large-scale test outcomes using a variety of CBMs. / Committee in charge: Gerald Tindal, Chairperson, Educational Methodology, Policy, and Leadership; Paul Yovanoff, Member, Educational Methodology, Policy, and Leadership; Keith Hollenbeck, Member, Educational Methodology, Policy, and Leadership; Jean Stockard, Outside Member, Planning Public Policy & Mgmt
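The two-step regression comparison reads naturally as nested OLS models; the sketch below, with simulated scores standing in for the district's data, shows the structure of the analysis:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical CBM data: which measures predict the state reading test (OAKS)?
rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "prf":     rng.normal(150, 30, n),     # words read correctly per minute
    "vocab":   rng.normal(20, 5, n),       # vocabulary CBM score
    "female":  rng.integers(0, 2, n),
    "at_risk": rng.integers(0, 2, n),
})
df["oaks"] = (180 + 0.05 * df["prf"] + 1.2 * df["vocab"]
              - 3 * df["at_risk"] + rng.normal(0, 6, n))

# Step 1: the two CBMs alone.  Step 2: add the non-performance indicators.
m1 = smf.ols("oaks ~ prf + vocab", df).fit()
m2 = smf.ols("oaks ~ prf + vocab + female + at_risk", df).fit()
print(m1.rsquared, m2.rsquared)    # change in explained variance between steps
print(m2.pvalues)                  # which predictors remain significant
```

Comparing the R² values and coefficient p-values between the nested models is exactly the variance-explained comparison the study reports for Vocabulary versus PRF.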
128

Detecção e diagnóstico de falhas baseado em modelos empíricos no subespaço das variáveis de processo (EMPVSUB) / Fault detection and diagnosis based on empirical models in the subspace of the process variables (EMPVSUB)

Bastidas, Maria Eugenia Hidalgo January 2018 (has links)
The scope of this dissertation is the development of a methodology for fault detection and diagnosis in industrial processes based on empirical models in the subspace of the process variables, with nonlinear expansion of the model bases. Fault detection and diagnosis are critical to increasing the safety, reliability, and profitability of industrial processes; qualitative methods, quantitative methods, and methods based on process historical data have been studied extensively. The proposed methodology builds empirical models from process historical data using nonlinear basis expansion (polynomial and exponential bases) and regularization techniques. To demonstrate its advantages, it is compared with two standard methodologies, one based on Principal Component Analysis (PCA) and the other on Partial Least Squares (PLS), in two case studies: a stirred heating tank and the Tennessee Eastman process. The advantages of the proposed methodology are the reduction of the dimensionality of the data needed for an adequate diagnosis, the effective detection of abnormalities, and the identification of the variables most related to the fault, allowing a better diagnosis. Because the model bases are expanded, nonlinear systems can be handled effectively through polynomial and exponential functions within the model. The work also includes a methodology for validating the results, which eliminates variables from the best empirical model using the Backward Elimination algorithm. The proposed methodology gave good fault-diagnosis results: it reduced the dimensionality of the studied systems by up to 93.55%, correctly detected abnormalities, and determined the variables most related to the process abnormalities. Comparisons with the standard methodologies showed that the proposed approach performs better, since it detects abnormalities in a reduced dimensional space, capturing nonlinear behavior and reducing uncertainty.
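As context for the PCA baseline the dissertation compares against, here is a standard PCA fault-detection sketch using Hotelling's T² and SPE statistics; the simulated sensors and fault magnitude are assumptions, and this is the comparison method, not the proposed EMPVSUB approach:

```python
import numpy as np
from sklearn.decomposition import PCA

# Simulated normal operating data: 10 correlated "sensors".
rng = np.random.default_rng(0)
normal = rng.normal(size=(500, 10))
normal[:, 1] = 0.8 * normal[:, 0] + 0.2 * normal[:, 1]   # correlated pair

mu, sigma = normal.mean(0), normal.std(0)
Z = (normal - mu) / sigma
pca = PCA(n_components=3).fit(Z)

def t2_spe(x):
    """Hotelling T^2 in the PC subspace and SPE (residual norm) for one sample."""
    z = (x - mu) / sigma
    scores = pca.transform(z[None])[0]
    t2 = np.sum(scores**2 / pca.explained_variance_)
    residual = z - pca.inverse_transform(scores[None])[0]
    return t2, residual @ residual

fault = normal[0].copy()
fault[4] += 6.0                      # simulate a biased/stuck sensor
print("normal sample:", t2_spe(normal[1]))
print("faulty sample:", t2_spe(fault))
```

A fault that moves the data off the normal correlation structure inflates SPE, while a fault within the modeled subspace inflates T²; control limits on both statistics turn this into an online detector.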
129

Quantitative Research on the Return of Private Seasoned Equity Offerings: Evidence from China

January 2017 (has links)
abstract: This paper quantitatively analyses the relation between the return of private seasoned equity offerings and variables of market and firm characteristics in the Chinese A-share market. A multiple-factor linear regression model is constructed to estimate this relation, and the result can help investors determine the future return of private placement stocks. In this paper, I first review past theories about private placement stocks, including how large-shareholder participation, the discount of private offerings, firm characteristics, and investment in firm value affect the return of private offerings. Based on the past literature, I propose four main factors that may affect the return of a private placement: large-shareholder participation in the placement; the discount that the placement offers; the characteristics of the companies making the offering; and the intrinsic value of such companies. I adopt statistical and correlational analysis to test the impact of each factor. Then, based on this single-factor analysis, I set up a multiple-factor linear regression model of private seasoned equity offering returns in Chapter Four. In the last two chapters, I apply this quantitative model to other fields: I use it to evaluate current private placement financial products and to develop investment strategies for stocks with private seasoned equity offerings in the secondary market. Backtest results show that the quantitative strategy is useful. / Dissertation/Thesis / Doctoral Dissertation Business Administration 2017
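The paper's multiple-factor model has the familiar OLS form; the sketch below shows that structure in Python, where all four regressor names and the simulated returns are illustrative assumptions rather than the paper's data:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical cross-section of private placements and their returns.
rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "major_holder": rng.integers(0, 2, n),      # large shareholder participates?
    "discount":     rng.uniform(0.0, 0.3, n),   # issue-price discount
    "log_mktcap":   rng.normal(22, 1, n),       # firm characteristic
    "book_to_mkt":  rng.uniform(0.2, 1.5, n),   # proxy for intrinsic value
})
df["ret"] = (0.05 * df["major_holder"] + 0.4 * df["discount"]
             - 0.01 * (df["log_mktcap"] - 22) + 0.03 * df["book_to_mkt"]
             + rng.normal(0, 0.08, n))

# Multiple-factor linear regression combining the four single-factor results.
model = smf.ols("ret ~ major_holder + discount + log_mktcap + book_to_mkt",
                df).fit()
print(model.params)
print(model.rsquared)
```

Fitting each factor alone and then jointly reproduces the paper's single-factor-to-multiple-factor workflow.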
130

Parâmetros genéticos e fenotípicos do perfil de ácidos graxos do leite de vacas da raça holandesa / Genetic and phenotypic parameters of the fatty acid profile of milk from Holstein cows

Mary Ana Petersen Rodriguez 05 July 2013 (has links)
During the last decades, genetic improvement of dairy cattle in Brazil was based mainly on the importation of genetic material, resulting in small genetic gains for traits of economic interest. There is therefore a clear need for genetic evaluation under national environmental conditions, so as to increase milk production together with quality. In this context, knowledge of milk composition is very important for understanding how certain environmental and, especially, genetic factors may influence the increase in protein (PROT), fat (FAT) and beneficial fatty acid (FA) content and the reduction of somatic cell count, with the aim of improving the nutritional quality of this product. The aim of this study was to predict the levels of FA of interest using Bayesian linear regression, to estimate variance components and heritability coefficients, and to compare models with different orders of adjustment using Legendre polynomial functions in random regression models. Milk samples were subjected to gas chromatography and mid-infrared spectrometry for the determination of fatty acids. The results obtained by the two methods were compared using Pearson's correlation, Bland-Altman analysis and Bayesian linear regression; prediction equations were then developed for myristic acid (C14:0) and conjugated linoleic acid (CLA) from simple and multiple Bayesian linear regressions, considering non-informative and informative priors. Legendre orthogonal polynomials from 1st to 6th order were used to fit the random regressions of the traits. Prediction of the FA by linear regression was viable, with prediction errors ranging from 0.01 to 4.84 g per 100 g of fat for C14:0 and from 0.002 to 1.85 g per 100 g of fat for CLA; the smallest prediction errors were obtained with multiple regression under a non-informative prior. The models that best fitted FAT, PROT, C16:0, C18:0, C18:1c9, CLA, saturated (SAT), unsaturated (UNSAT), monounsaturated (MONO) and polyunsaturated (POLY) fatty acids were of 1st order, and those for somatic cell score (SCS) and C14:0 of 2nd order. Heritability estimates in the best-fitting models ranged from 0.08 to 0.11 for FAT; 0.28 to 0.35 for PROT; 0.03 to 0.22 for SCS; 0.12 to 0.31 for C16:0; 0.08 to 0.14 for C18:0; 0.24 to 0.43 for C14:0; 0.07 to 0.17 for C18:1c9; 0.13 to 0.39 for CLA; 0.14 to 0.31 for SAT; 0.04 to 0.14 for UNSAT; 0.04 to 0.13 for MONO; and 0.09 to 0.20 for POLY, with 0.12 for milk yield (PROD). We conclude that improvements in the nutritional quality of milk can be obtained by including production traits and the fatty acid profile in genetic selection programs.
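Bayesian linear regression of the kind used for FA prediction has a closed conjugate form when the noise variance is treated as known; the sketch below contrasts a diffuse (near non-informative) prior with an informative one. The predictors, prior settings and known-variance simplification are assumptions for illustration:

```python
import numpy as np

def bayes_linreg(X, y, tau2=1.0, sigma2=1.0, m0=None):
    """Conjugate Bayesian linear regression with a Gaussian prior
    N(m0, tau2*I) on the coefficients and known noise variance sigma2.
    Returns the posterior mean and covariance of the coefficients."""
    p = X.shape[1]
    m0 = np.zeros(p) if m0 is None else m0
    prec = X.T @ X / sigma2 + np.eye(p) / tau2         # posterior precision
    cov = np.linalg.inv(prec)
    mean = cov @ (X.T @ y / sigma2 + m0 / tau2)
    return mean, cov

# Hypothetical use: predict C14:0 from two mid-infrared milk components.
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.normal(size=(50, 2))])  # intercept + 2 covariates
beta_true = np.array([10.0, 2.0, -0.5])
y = X @ beta_true + rng.normal(scale=0.5, size=50)

# Diffuse (near non-informative) prior vs. an informative prior
# centered on assumed prior knowledge of the coefficients.
diffuse, _ = bayes_linreg(X, y, tau2=1e6, sigma2=0.25)
inform, _  = bayes_linreg(X, y, tau2=0.1, sigma2=0.25, m0=np.array([10.0, 2.0, 0.0]))
print(np.round(diffuse, 2), np.round(inform, 2))
```

With a diffuse prior the posterior mean approaches the least-squares fit, while the informative prior shrinks the estimates toward the prior center, which is how informative priors alter prediction errors in such comparisons.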
