201 |
Fitting the Orskov and McDonald (1979) model to in situ ruminal degradation data using weighted least squares
Soares, Ana Paula Meira, 27 September 2007 (has links)
The main objective of this work was to study the differences between results obtained with weighted least squares and ordinary least squares when fitting the Orskov and McDonald (1979) model to dry matter (DM) and acid detergent fiber (ADF) degradation data from fistulated Nelore steers, using the in situ technique. The data came from an experiment laid out as a 4x4 Latin square (four animals and four periods) with the following treatments: diet with calcium salts of fatty acids and monensin (A); diet with whole cottonseed and monensin (B); control diet with monensin (C); and diet with whole cottonseed without monensin (D). Degradability measurements were collected on eight occasions (0, 3, 6, 12, 24, 48, 72 and 96 hours). Because these measurements are taken repeatedly on the same animal, the variances of the responses at the different occasions are not expected to be equal. The proposed analyses used both the original data (DM and ADF) and data corrected for animal and period effects. In general, the use of weighted least squares, with weights given by the inverse of the variance of the data at each occasion, together with the removal of animal and period effects from the original data, changed the results of the analyses, increasing the test statistics and altering their significance.
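As a hedged illustration of the weighted fit described above, the sketch below fits the Orskov and McDonald (1979) model p(t) = a + b(1 - e^(-ct)) by weighted least squares in Python, with weights equal to the inverse of the per-occasion variance; the data values, variances and starting estimates are invented placeholders, not the experimental measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def orskov_mcdonald(t, a, b, c):
    """Degradation (%) after t hours: soluble fraction a, degradable fraction b, rate c."""
    return a + b * (1.0 - np.exp(-c * t))

t = np.array([0.0, 3, 6, 12, 24, 48, 72, 96])                    # incubation times (h)
y = np.array([22.1, 30.4, 36.8, 45.9, 57.3, 66.0, 69.8, 71.2])   # placeholder mean degradation (%)
var = np.array([1.0, 1.5, 2.2, 3.0, 4.1, 4.8, 5.2, 5.5])         # placeholder per-occasion variances

# curve_fit minimises sum(((y - f(t)) / sigma)**2), so sigma = sqrt(var)
# weights each occasion by the inverse of its variance, as described above.
popt, pcov = curve_fit(orskov_mcdonald, t, y, p0=[20.0, 50.0, 0.05],
                       sigma=np.sqrt(var), absolute_sigma=True)
a, b, c = popt
print(f"a = {a:.2f} %, b = {b:.2f} %, c = {c:.4f} per hour")
```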
202 |
Heuristic discovery and design of promoters for the fine-control of metabolism in industrially relevant microbes
Gilman, James, January 2018 (has links)
Predictable, robust genetic parts, including constitutive promoters, are one of the defining attributes of synthetic biology. Ideally, candidate promoters should cover a broad range of expression strengths and yield homogeneous output, whilst also being orthogonal to endogenous regulatory pathways. However, such libraries are not always readily available in non-model organisms, such as the industrially relevant genus Geobacillus. A multitude of approaches are available for the identification and de novo design of prokaryotic promoters, although it may be unclear which methodology is most practical in an industrial context. Endogenous promoters may be individually isolated from upstream of well-understood genes, or bioinformatically identified en masse. Alternatively, pre-existing promoters may be mutagenised, or mathematical abstraction can be used to model promoter strength and design de novo synthetic regulatory sequences. In this investigation, bioinformatic, mathematical and mutagenic approaches to promoter discovery were directly compared. Hundreds of previously uncharacterised putative promoters were bioinformatically identified from the core genome of four Geobacillus species, and a rational sampling method was used to select sequences for in vivo characterisation. A library of 95 promoters covered a 2-log range of expression strengths when characterised in vivo using fluorescent reporter proteins. Data derived from this experimental characterisation were used to train Artificial Neural Network, Partial Least Squares and Random Forest statistical models, which quantifiably inferred the relationship between DNA sequence and function. The resulting models showed limited predictive power but good descriptive power. In particular, the models highlighted the importance of sequences upstream of the canonical -35 and -10 motifs in determining promoter function in Geobacillus. Additionally, two commonly used mutagenic techniques for promoter production, Saturation Mutagenesis of Flanking Regions and error-prone PCR, were applied. The resulting sequence libraries showed limited promoter activity, underlining the difficulty of deriving synthetic promoters in species where understanding of transcriptional regulation is limited. As such, bioinformatic identification and deep characterisation of endogenous promoter elements was posited as the most practical approach for deriving promoter libraries in non-model organisms of industrial interest.
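A minimal sketch of the sequence-to-strength modelling step, using a Random Forest, one of the three model classes named above. The one-hot encoding, the 28-nt placeholder sequences and the strengths are assumptions for illustration; a real model would be trained on the full characterised library, not three examples.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

BASES = "ACGT"

def one_hot(seq):
    """Flatten a DNA sequence into a binary vector of length 4 * len(seq)."""
    m = np.zeros((len(seq), 4))
    for i, base in enumerate(seq):
        m[i, BASES.index(base)] = 1.0
    return m.ravel()

# Placeholder promoter-like sequences and normalised reporter strengths.
seqs = ["TTGACAATTAATCATCGGCTCGTATAAT",
        "TTGACAGCTAGCTCAGTCCTAGTATAAT",
        "CTGACGGCTAGCTCAGTCCTAGGTACAG"]
strengths = np.array([1.00, 0.42, 0.07])

X = np.vstack([one_hot(s) for s in seqs])
model = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, strengths)

# Summing importances over the four bases gives a per-position profile,
# e.g. to ask whether positions upstream of the -35 motif matter.
per_position = model.feature_importances_.reshape(-1, 4).sum(axis=1)
```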
203 |
Empirical studies on stock return predictability and international risk exposure
Lu, Qinye, January 2016 (has links)
This thesis consists of one stock return predictability study and two international risk exposure studies. The first study shows that the statistical significance of the out-of-sample predictability of market returns reported by Kelly and Pruitt (2013), using a partial least squares methodology applied to the valuation ratios of portfolios, is overstated for two reasons. Firstly, their analysis is conducted on gross returns rather than excess returns, which raises the apparent predictability of the equity premium by including predictable movements of interest rates. Secondly, the bootstrap statistics used to assess out-of-sample significance do not account for small-sample bias in the estimated coefficients. This bias is well known to affect in-sample tests of significance, and I show that it is also important for out-of-sample tests of significance. Accounting for both effects can radically change the conclusions; for example, the recursive out-of-sample R2 values for the sample period 1965-2010 are insignificant for the prediction of one-year excess returns and one-month returns, except in the case of the book-to-market ratios of six size- and value-sorted portfolios, which are significant at the 10% level. The second study examines whether U.S. common stocks are exposed to international risks, which I define as shocks to foreign markets that are orthogonal to U.S. market returns. By sorting stocks on past exposure to this risk factor I show that it is possible to create portfolios with an ex-post spread in exposure to international risk. I examine whether international risk is priced in the cross-section of U.S. stocks, and find that for small stocks an increase in exposure to international risk results in lower returns relative to the Fama-French three-factor model. I conduct a similar analysis on a measure of the international value premium and find little evidence of this risk being priced in U.S. stocks. The third study examines whether portfolios of U.S. stocks can mimic foreign index returns, thereby providing investors with the benefits of international diversification without the need to invest directly in assets that trade abroad. I test this proposition using index data from seven developed markets and eight emerging markets over the period 1975-2013. Portfolios of U.S. stocks are constructed out-of-sample to mimic these international indices using a step-wise procedure that selects from a variety of industry portfolios, stocks of multinational corporations, country funds and American depositary receipts. I also use a partial least squares approach to form mimicking portfolios. I show that investors are able to gain considerable exposure to emerging market indices using domestically traded stocks. However, for developed market indices it is difficult to obtain home-made exposure beyond the simple exposure of foreign indices to the U.S. market factor. Using mean-variance spanning tests I find that, with few exceptions, international indices do not improve on the investment frontier provided by the domestically constructed alternative of investing in the U.S. market index and portfolios of industries and multinational corporations.
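The out-of-sample R2 statistic at the centre of the first study compares expanding-window forecasts with the historical-mean benchmark. The sketch below is a generic implementation of that statistic, not the thesis code; it assumes the predictor series is already lagged so that x[t] is available when forecasting y[t].

```python
import numpy as np

def oos_r2(y, x, min_window=60):
    """Out-of-sample R^2 of an expanding-window regression of y[t] on x[t]
    (x already lagged), against the expanding historical-mean forecast."""
    err_model, err_mean = [], []
    for t in range(min_window, len(y)):
        b = np.polyfit(x[:t], y[:t], 1)            # OLS on data up to t-1
        err_model.append(y[t] - np.polyval(b, x[t]))
        err_mean.append(y[t] - y[:t].mean())       # historical-mean benchmark
    err_model, err_mean = np.array(err_model), np.array(err_mean)
    return 1.0 - (err_model ** 2).sum() / (err_mean ** 2).sum()
```

Running such a statistic on excess rather than gross returns, and assessing it with bootstrap distributions that allow for small-sample bias in the estimated coefficients, are the two corrections the study argues for.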
204 |
Estimating the determinants of FDI in Transition economies: comparative analysis of the Republic of Kosovo
Berisha, Jetëmira, January 2012 (has links)
This study develops a panel data analysis of 27 transition and post-transition economies over the period 2003-2010. Its intent is to investigate empirically the effect of seven variables on foreign flows, and then to use the observed findings in a comparative analysis between Kosovo and regional countries such as Albania, Bosnia and Herzegovina, Macedonia, Montenegro and Serbia. As the crisis period (2008-2010) was included in the data set used to model the behaviour of FDI, both the Chow test and the time-dummies technique suggest the presence of a structural break. Empirical results show that FDI is positively related to the one-year lagged effect of real GDP growth, trade openness, labour force, a low level of wages (proxied by remittances), the real interest rate and a low level of corruption. The corporate income tax is found to be significant and inversely related to foreign flows. The comparative analysis of real GDP growth rates shows that Kosovo has the most stable macroeconomic environment in the region, but it is still continuously confronted by a high trade deficit and a high rate of unemployment. Apart from this, the key obstacle that has undermined efforts to attract foreign investment is found to be the trade blockade of...
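A hedged sketch of the Chow test for a structural break, the first of the two break diagnostics mentioned above. Variable names are generic and the design matrix is assumed to include an intercept column, so this is an illustration rather than the study's own specification.

```python
import numpy as np
from scipy import stats

def chow_test(X, y, break_idx):
    """F-test that the regression coefficients are equal before and after
    break_idx. X is assumed to contain an intercept column."""
    def ssr(Xs, ys):
        beta = np.linalg.lstsq(Xs, ys, rcond=None)[0]
        e = ys - Xs @ beta
        return e @ e
    n, k = X.shape
    ssr_pooled = ssr(X, y)
    ssr_split = ssr(X[:break_idx], y[:break_idx]) + ssr(X[break_idx:], y[break_idx:])
    F = ((ssr_pooled - ssr_split) / k) / (ssr_split / (n - 2 * k))
    p_value = 1.0 - stats.f.cdf(F, k, n - 2 * k)
    return F, p_value
```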
205 |
On the regularization of the recursive least squares algorithm
Manolis Tsakiris, 25 June 2010 (has links)
This thesis is concerned with the regularization of the Recursive Least-Squares (RLS) algorithm. In the first part of the thesis, a novel regularized exponentially weighted array RLS algorithm is developed, which circumvents the problem of fading regularization inherent to the standard regularized exponentially weighted RLS formulation, while allowing the employment of generic time-varying regularization matrices. The standard equations are directly perturbed via a chosen regularization matrix; the resulting recursions are then extended to the array form. The price paid is an increase in computational complexity, which becomes cubic. The superiority of the algorithm over alternative algorithms is demonstrated via simulations in the context of adaptive beamforming, in which low filter orders are employed, so that complexity is not an issue. In the second part of the thesis, an alternative criterion is motivated and proposed for the dynamic regulation of regularization in the standard RLS algorithm. The regularization is implicitly achieved via dithering of the input signal. The proposed criterion is of general applicability and aims at achieving a balance between the accuracy of the numerical solution of a perturbed linear system of equations and its distance from the analytical solution of the original system, for a given computational precision. Simulations show that the proposed criterion can be effectively used to compensate for large condition numbers, small finite precisions and unnecessarily large values of the regularization.
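For concreteness, here is a textbook exponentially weighted RLS recursion with the usual initial regularization P0 = I/delta. The influence of delta decays with the forgetting factor, which is exactly the fading-regularization problem the first algorithm above addresses; the thesis's array-form and dithering-based algorithms are not reproduced here. The channel-identification example is an invented placeholder.

```python
import numpy as np

class RLS:
    """Exponentially weighted RLS with forgetting factor lam and P_0 = I/delta."""
    def __init__(self, order, lam=0.99, delta=1e-2):
        self.w = np.zeros(order)          # filter weights
        self.P = np.eye(order) / delta    # regularized inverse-correlation estimate
        self.lam = lam

    def update(self, u, d):
        """One step: regressor u, desired sample d; returns the a priori error."""
        Pu = self.P @ u
        k = Pu / (self.lam + u @ Pu)      # gain vector
        e = d - self.w @ u                # a priori error
        self.w += k * e
        self.P = (self.P - np.outer(k, Pu)) / self.lam
        return e

# Identify a short FIR channel; delta's influence decays roughly like lam**n.
rng = np.random.default_rng(0)
w_true = np.array([1.0, -0.5, 0.25, 0.1])
rls, u_buf = RLS(order=4), np.zeros(4)
for _ in range(2000):
    u_buf = np.roll(u_buf, 1)
    u_buf[0] = rng.normal()
    rls.update(u_buf, w_true @ u_buf + 0.01 * rng.normal())
print(rls.w)
```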
206 |
Least Squares Estimation of the Pareto Type I and II Distribution
Chien, Ching-hua, 01 May 1982 (has links)
Estimation of the Pareto distribution can be computationally expensive, and the standard method is badly biased. In this work, an improved least squares derivation is used, and the resulting estimation is less biased. Numerical examples and figures are provided so that the solution may be observed more clearly. Furthermore, the estimators of the parameters obtained under the different estimation methods are compared. The improved least squares derivation can be employed with confidence, as it is economical and efficient.
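To make the regression idea concrete, the sketch below shows a basic least squares estimator for the Pareto Type I parameters, obtained by regressing the log empirical survival function on log x. It illustrates the general approach only, and is not the improved, less biased derivation developed in this work.

```python
import numpy as np

def pareto1_ls(x):
    """Least squares estimates (alpha, x_m) for a Pareto Type I sample:
    S(x) = (x_m / x)**alpha, so log S is linear in log x with slope -alpha."""
    x = np.sort(np.asarray(x))
    n = len(x)
    S = 1.0 - (np.arange(1, n + 1) - 0.5) / n   # empirical survival, avoiding log(0)
    slope, intercept = np.polyfit(np.log(x), np.log(S), 1)
    alpha = -slope
    x_m = np.exp(intercept / alpha)
    return alpha, x_m

rng = np.random.default_rng(0)
sample = 2.0 * (1.0 + rng.pareto(3.0, size=500))  # Pareto I with x_m = 2, alpha = 3
print(pareto1_ls(sample))
```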
207 |
Semiparametric Estimation of Unimodal Distributions
Looper, Jason K, 20 August 2003 (has links)
One often wishes to understand the probability distribution of stochastic data from experiments or computer simulations. However, where no model is given, practitioners must resort to parametric or nonparametric methods in order to gain information about the underlying distribution. Others have first used a nonparametric estimator to understand the underlying shape of a set of data, and then returned with a parametric method to locate the peaks. However, they were interested in estimating spectra, which may have multiple peaks, whereas in this work we are interested in approximating the peak position of a single-peak probability distribution.
One method of analyzing a distribution of data is to fit a curve to the data, or to smooth them. Polynomial regression and least-squares fitting are examples of smoothing methods. Initial understanding of the underlying distribution can be obscured depending on the degree of smoothing. Problems such as undersmoothing and oversmoothing must be addressed in order to determine the shape of the underlying distribution. Furthermore, smoothing of skewed data can give a biased estimate of the peak position.
We propose two new approaches for statistical mode estimation based on the assumption that the underlying distribution has only one peak. The first method imposes the global constraint of unimodality locally, by requiring negative curvature over some domain. The second method performs a search that assumes a position of the distribution's peak and requires positive slope to the left, and negative slope to the right. Each approach entails a constrained least-squares fit to the raw cumulative probability distribution.
We compare the relative efficiencies [12] of these two estimators in finding the peak location for artificially generated data from known families of distributions: Weibull, beta, and gamma. Within each family a parameter controls the skewness or kurtosis, quantifying the shapes of the distributions for comparison. We also compare our methods with other estimators such as the kernel-density estimator, adaptive histogram, and polynomial regression. By comparing the effectiveness of the estimators, we can determine which estimator best locates the peak position.
We find that our estimators do not perform better than other known estimators. We also find that our estimators are biased. Overall, an adaptation of kernel estimation proved to be the most efficient.
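For reference, a minimal version of the kind of kernel-density mode estimator that proved most efficient: estimate the density with a Gaussian kernel and take the argmax on a grid. The bandwidth (here SciPy's default Scott rule) governs the under-/oversmoothing trade-off discussed above; the gamma sample is an invented test case.

```python
import numpy as np
from scipy.stats import gaussian_kde

def kde_mode(data, grid_points=1000):
    """Peak of a Gaussian kernel density estimate, located on a grid."""
    kde = gaussian_kde(data)                  # Scott's rule bandwidth by default
    grid = np.linspace(data.min(), data.max(), grid_points)
    return grid[np.argmax(kde(grid))]

rng = np.random.default_rng(0)
print(kde_mode(rng.gamma(5.0, size=2000)))    # true mode of Gamma(5, 1) is 4
```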
The results for the work done in this thesis will be submitted, in a different form, for publication by D.A. Rabson and J.K. Looper.
208 |
A Model of Global Marketing in Multinational Firms: An Empirical Investigation
Venaik, Sunil, AGSM, UNSW, January 1999 (has links)
With increasing globalisation of the world economy, there is growing interest in international business research among academics, business practitioners and public policy makers. As marketing is usually the first corporate function to internationalise, it occupies centre stage in the international strategy debate. The objective of this study is to understand the environmental and organisational factors that drive the desirable outcomes of learning, innovation and performance in multinational firms. By adapting the IO-based, resource-based and contingency theories, the study proposes an environment-conduct-outcome framework and a model of global marketing in MNCs. Using the structural equation modelling-based PLS methodology, the model is estimated with data from a global survey of marketing managers in MNC subsidiaries. The results show that the traditional international marketing strategy and organisational structure constructs of adaptation and autonomy do not have a significant direct effect on MNC performance. Instead, the effects are largely mediated by the networking, learning and innovation constructs included in the proposed model. The study also shows that, whereas collaborative decision-making has a positive effect on interunit learning, subsidiary autonomy has a significant influence on innovativeness in MNC subsidiaries. Finally, it is found that marketing mix adaptation has an adverse impact on the performance of MNCs facing high global integration pressures but improves the performance of MNCs confronted with low global integration pressures. The findings have important implications for global marketing in MNCs. First, to enhance organisational learning and innovation and ultimately improve corporate performance, MNCs should simultaneously develop the potentially conflicting organisational attributes of collaborative decision-making among subsidiaries and greater subsidiary autonomy. Second, to tap local knowledge, MNCs should increasingly regard their country units as 'colleges' or 'seminaries' of learning rather than merely as 'subsidiaries' with secondary or subordinate roles. Finally, to improve MNC performance, the key requirement is to achieve a good fit between the global organisational structure, marketing strategy and business environment. Overall, the results provide partial support for the IO-based and resource-based views, and strong support for the contingency perspective in international strategy.
209 |
Regression methods in multidimensional prediction and estimation
Björkström, Anders, January 2007 (has links)
In regression with nearly collinear explanatory variables, the least squares predictor has large variance. Ordinary least squares regression (OLSR) often leads to unrealistic regression coefficients. Several regularized regression methods have been proposed as alternatives, among them principal components regression (PCR), ridge regression (RR) and continuum regression (CR). The latter two involve a continuous metaparameter, offering additional flexibility.

For a univariate response variable, CR incorporates OLSR, partial least squares regression (PLSR) and PCR as special cases, for special values of the metaparameter. CR is also closely related to RR. However, CR can in fact yield regressors that vary discontinuously with the metaparameter, so the relation between CR and RR is not always one-to-one. We develop a new class of regression methods, LSRR, essentially the same as CR but without discontinuities, and prove that any optimization principle will yield a regressor proportional to a RR, provided only that the principle implies maximizing some function of the regressor's sample correlation coefficient and its sample variance. For a multivariate response vector we demonstrate that a number of well-established regression methods are related, in that they are special cases of essentially one general procedure. We try a more general method based on this procedure, with two metaparameters. In a simulation study we compare this method to ridge regression, multivariate PLSR and repeated univariate PLSR. For most types of data studied, all methods do approximately equally well. There are cases where RR and LSRR yield larger errors than the other methods, and we conclude that one-factor methods are not adequate for situations where more than one latent variable is needed to describe the data. Among the methods based on latent variables, none of those tried is superior to the others in any obvious way.
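In the spirit of the simulation study, the sketch below compares ridge regression with one- and two-factor PLSR on near-collinear data generated from two latent variables. The settings are illustrative assumptions; the LSRR method itself is not reproduced.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n, p = 100, 10
Z = rng.normal(size=(n, 2))                                       # two latent variables
X = Z @ rng.normal(size=(2, p)) + 0.05 * rng.normal(size=(n, p))  # near-collinear regressors
y = Z @ np.array([1.0, -2.0]) + 0.1 * rng.normal(size=n)

for name, model in [("ridge", Ridge(alpha=1.0)),
                    ("PLSR, 1 factor", PLSRegression(n_components=1)),
                    ("PLSR, 2 factors", PLSRegression(n_components=2))]:
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: cross-validated R^2 = {r2:.3f}")
```

With data driven by two latent variables, the one-factor fit should trail the two-factor fit, mirroring the conclusion above that one-factor methods are inadequate when more than one latent variable is needed.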
210 |
Design and Implementation of a Test Rig for a Gyro Stabilized Camera System
Eklånge, Johannes, January 2006 (has links)
PolyTech AB in Malmköping manufactures gyro stabilized camera systems for helicopter applications. In this Master's thesis a shaker test rig for vibration testing of these systems is designed, implemented and evaluated. The shaker is required to have an adjustable frequency and displacement, and different shakers that meet these requirements are treated in a literature study.

The shaker chosen for the test rig is based on a mechanical solution that is described in detail. Additionally, all components used in the test rig are described and modelled. The test rig is identified and evaluated from experiments carried out at PolyTech, where the major part of the identification is based on data collected from accelerometers.

The test rig model is used to develop a controller that controls the frequency and the displacement of the shaker. A three-phase motor is used to control the frequency of the shaker, and a linear actuator with a servo is used to control the displacement. The servo controller is designed using observer and state feedback techniques.

Additionally, the mount in which the camera system hangs is modelled and identified, where the identification method is based on a nonlinear least squares (NLS) curve fitting technique.
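A hedged sketch of the nonlinear least squares identification step: fitting a damped-oscillator free-decay response, an assumed model form rather than the thesis's mount model, to simulated accelerometer data.

```python
import numpy as np
from scipy.optimize import least_squares

def model(theta, t):
    """Free decay of a damped oscillator: amplitude, damping ratio, natural freq, phase."""
    A, zeta, wn, phi = theta
    wd = wn * np.sqrt(1.0 - zeta ** 2)        # damped natural frequency
    return A * np.exp(-zeta * wn * t) * np.cos(wd * t + phi)

def residuals(theta, t, y):
    return model(theta, t) - y

t = np.linspace(0.0, 2.0, 400)
rng = np.random.default_rng(0)
y = model([1.0, 0.05, 2 * np.pi * 8, 0.0], t) + 0.02 * rng.normal(size=t.size)  # simulated data

fit = least_squares(residuals, x0=[0.5, 0.1, 2 * np.pi * 5, 0.0],
                    bounds=([0, 0, 0, -np.pi], [np.inf, 0.99, np.inf, np.pi]),
                    args=(t, y))
print(fit.x)   # recovered amplitude, damping ratio, natural frequency, phase
```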