161

Least Squares Monte Carlo-metoden & korgoptioner : En kvantitativ studie

Sandin, Måns January 2019
In the banking and insurance industry there is a need for forecasts and risk measures connected to financial instruments. To create the price distributions on which such risk measures are based, nested simulation is sometimes used. First, a large number of outer scenarios are simulated for some asset underlying a financial instrument, by simulating prices over a time period; this period defines the time horizon at which the price distribution is taken. From each outer scenario a number of inner scenarios are then simulated and used to price the financial instrument in that outer scenario. A common method for pricing the outer scenarios is the Monte Carlo method, which requires a large number of inner scenarios for the pricing to be accurate, making it demanding in both time and computing power. The Least Squares Monte Carlo method is an alternative that uses regression and the least squares method to perform the pricing with a smaller number of inner scenarios: a regression function is fitted to the values of the outer scenarios and then used to revalue them, reducing the error that a smaller number of random draws would otherwise cause. The regression function can also price values outside those it was fitted to, so it can be reused in similar computations. This thesis examines how well the Least Squares Monte Carlo method describes the price distribution of basket options, i.e. options with several underlying assets. Tests are run for different parameter values, with emphasis on the effect of the length of the outer scenarios and on how accurately the tails of the price distribution are described. The results are partly hard to analyse due to many extreme values, but they point to difficulties in pricing longer outer scenarios, possibly because the regression function used had trouble fitting to, and describing, more dispersed price distributions. The method also performed worse in the lower part of the price distribution, a weakness it shares with standard Monte Carlo. More research is needed to assess the effect of other sets of regression functions on the method.
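A minimal numerical sketch may help fix ideas on the nested-simulation and regression steps described above. The sketch below is not the thesis's code: the geometric Brownian motion dynamics, the equally weighted basket call, the polynomial basis and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_outer, n_inner, n_assets = 5000, 10, 3
s0, r, sigma = 100.0, 0.02, 0.2
t_outer, t_mat, strike = 1.0, 2.0, 100.0   # risk horizon, maturity, strike

# Outer scenarios: simulated asset prices at the risk horizon.
z = rng.standard_normal((n_outer, n_assets))
s_outer = s0 * np.exp((r - 0.5 * sigma**2) * t_outer
                      + sigma * np.sqrt(t_outer) * z)

# A small number of inner paths per outer scenario gives a noisy
# Monte Carlo price of the basket call in each scenario.
tau = t_mat - t_outer
z_in = rng.standard_normal((n_outer, n_inner, n_assets))
s_final = s_outer[:, None, :] * np.exp((r - 0.5 * sigma**2) * tau
                                       + sigma * np.sqrt(tau) * z_in)
payoff = np.maximum(s_final.mean(axis=2) - strike, 0.0)
noisy_price = np.exp(-r * tau) * payoff.mean(axis=1)

# LSMC step: regress the noisy prices on basis functions of the outer
# state and revalue every scenario with the fitted function, smoothing
# the error that few inner scenarios would otherwise leave behind.
basket = s_outer.mean(axis=1)
basis = np.column_stack([np.ones_like(basket), basket, basket**2])
beta, *_ = np.linalg.lstsq(basis, noisy_price, rcond=None)
lsmc_prices = basis @ beta   # smoothed price distribution at the horizon
```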
162

Split algorithms for LMS adaptive systems.

January 1991
by Ho King Choi. / Thesis (Ph.D.)--Chinese University of Hong Kong, 1991. / Includes bibliographical references.

Contents:
1. Introduction --- p.1
   1.1 Adaptive Filter and Adaptive System --- p.1
   1.2 Applications of Adaptive Filter --- p.4
       1.2.1 System Identification --- p.4
       1.2.2 Noise Cancellation --- p.6
       1.2.3 Echo Cancellation --- p.8
       1.2.4 Speech Processing --- p.10
   1.3 Chapter Summary --- p.14
   References --- p.15
2. Adaptive Filter Structures and Algorithms --- p.17
   2.1 Filter Structures for Adaptive Filtering --- p.17
   2.2 Adaptation Algorithms --- p.22
       2.2.1 The LMS Adaptation Algorithm --- p.24
           2.2.1.1 Convergence Analysis --- p.28
           2.2.1.2 Steady State Performance --- p.33
       2.2.2 The RLS Adaptation Algorithm --- p.35
   2.3 Chapter Summary --- p.39
   References --- p.41
3. Parallel Split Adaptive System --- p.45
   3.1 Parallel Form Adaptive Filter --- p.45
   3.2 Joint Process Estimation with a Split-Path Adaptive Filter --- p.49
       3.2.1 The New Adaptive System Identification Configuration --- p.53
       3.2.2 Analysis of the Split-Path System Modeling Structure --- p.57
       3.2.3 Comparison with the Non-Split Configuration --- p.63
       3.2.4 Some Notes on Even Filter Order Case --- p.67
       3.2.5 Simulation Results --- p.70
   3.3 Autoregressive Modeling with a Split-Path Adaptive Filter --- p.75
       3.3.1 The Split-Path Adaptive Filter for AR Modeling --- p.79
       3.3.2 Analysis of the Split-Path AR Modeling Structure --- p.84
       3.3.3 Comparison with Traditional AR Modeling System --- p.89
       3.3.4 Selection of Step Sizes --- p.90
       3.3.5 Some Notes on Odd Filter Order Case --- p.94
       3.3.6 Simulation Results --- p.94
       3.3.7 Application to Noise Cancellation --- p.99
   3.4 Chapter Summary --- p.107
   References --- p.109
4. Serial Split Adaptive System --- p.112
   4.1 Serial Form Adaptive Filter --- p.112
   4.2 Time Delay Estimation with a Serial Split Adaptive Filter --- p.125
       4.2.1 Adaptive TDE --- p.125
       4.2.2 Split Filter Approach to Adaptive TDE --- p.132
       4.2.3 Analysis of the New TDE System --- p.136
           4.2.3.1 Least-mean-square Solution --- p.138
           4.2.3.2 Adaptation Algorithm and Performance Evaluation --- p.142
       4.2.4 Comparison with Traditional Adaptive TDE Method --- p.147
       4.2.5 System Implementation --- p.148
       4.2.6 Simulation Results --- p.148
       4.2.7 Constrained Adaptation for the New TDE System --- p.156
   4.3 Chapter Summary --- p.163
   References --- p.165
5. Extension of the Split Adaptive Systems --- p.167
   5.1 The Generalized Parallel Split System --- p.167
   5.2 The Generalized Serial Split System --- p.170
   5.3 Comparison between the Parallel and the Serial Split Adaptive System --- p.172
   5.4 Integration of the Two Forms of Split Predictors --- p.177
   5.5 Application of the Integrated Split Model to Speech Encoding --- p.179
   5.6 Chapter Summary --- p.188
   References --- p.189
6. Conclusions --- p.191
   References --- p.197
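As a point of reference for the split-path structures in chapters 3 to 5, the following is a minimal sketch of the baseline (non-split) LMS algorithm applied to system identification; the plant coefficients, step size and signal lengths are illustrative choices, not values from the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)
plant = np.array([0.6, -0.3, 0.1])     # unknown FIR system to identify
n, order, mu = 4000, 3, 0.01           # samples, filter order, step size

x = rng.standard_normal(n)                                     # input
d = np.convolve(x, plant)[:n] + 0.01 * rng.standard_normal(n)  # desired

w = np.zeros(order)                    # adaptive filter weights
for k in range(order, n):
    u = x[k - order + 1:k + 1][::-1]   # regressor, newest sample first
    e = d[k] - w @ u                   # a priori estimation error
    w += mu * e * u                    # LMS weight update

print(np.round(w, 3))                  # approaches the plant coefficients
```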
163

Improving collaborative forecasting performance in the food supply chain

Eksoz, Can January 2014
The dynamic structure of the Food Supply Chain (FSC) distinguishes it from other supply chains. Providing food to customers in a healthy and fresh manner requires a significant effort on the part of manufacturers and retailers. In practice, while these partners collaboratively forecast time-sensitive and/or short-life product-groups (e.g. perishable, seasonal, promotional and newly launched products), they confront significant challenges that prevent them from generating accurate forecasts and conducting long-term collaborations. Partners' challenges are not limited to the fluctuating demand of time-sensitive product-groups and continuously evolving consumer choices; they are also largely related to the partners' conflicting expectations, which mainly surface during integration, forecasting and information exchange in the FSC. This research focuses specifically on Collaborative Forecasting (CF) practices in the FSC. CF is addressed from the manufacturers' point of view, as they collaboratively forecast perishable, seasonal, promotional and newly launched products with retailers. The underlying reasons are that there is a paucity of research studying CF from the manufacturers' standpoint, that the associated product-groups decay at short notice, and that their demand is influenced by uncertain consumer behaviour and the dynamic environment of the FSC. The aim of the research is to identify factors that have a significant influence on CF performance. Generating accurate forecasts for the aforementioned product-groups and sustaining long-term collaborations (one year or more) between partners are the two major performance criteria of CF in this research. The research systematically reviews the literature on Collaborative Planning, Forecasting and Replenishment (CPFR), which combines the supply chain practices of upstream and downstream members by linking their planning, forecasting and replenishment operations. The review also covers the research themes of supply chain integration, forecasting process and information sharing, because partners' CF is not limited to forecasting practices; it also encapsulates the integration of chains and bilateral information sharing for accurate forecasts. A single semi-structured interview with a UK-based food manufacturer and three online group discussions on the business-oriented social networking service LinkedIn enrich the research with pragmatic, qualitative data, which are coded and analysed with the software package QSR NVivo 9. Refining the results of the literature review with these qualitative data makes it possible to develop a rigorous conceptual model and associated hypotheses. A comprehensive online survey questionnaire is then delivered to food manufacturers located in the UK and Ireland, North America and Europe, and an exploratory data analysis technique using Partial Least Squares (PLS) is applied to analyse the survey data empirically. The most significant contributions of this research are (i) to extend the body of literature by offering a new CF practice aiming to improve forecast accuracy and long-term collaborations, and (ii) to provide managerial implications by offering a rigorous conceptual model that guides practitioners in implementing the CF practice to achieve accurate forecasts and long-term collaborations.
In detail, the research findings primarily emphasise that manufacturers' interdepartmental integration plays a vital role in successful CF and in integration with retailers. Effective integration with retailers, in turn, encourages manufacturers to conduct stronger CF in the FSC. Partners' forecasting meetings are another significant factor for CF, and the role of forecasters in these meetings is crucial, implying an indirect influence of forecasters on CF. Complementing past studies, this research further explores the manufacturers' information sources that are significant for CF and should be shared with retailers, and shows that maintaining the quality of the information shared with retailers matters as well; this result suggests that information quality is indirectly important for CF. Two major elements contribute to the literature. Firstly, concentrating on these particular product-groups in the FSC and examining CF from the manufacturers' point of view not only closes a pragmatic gap in the literature but also identifies new areas for future study in the FSC. Secondly, the CF practice of this research demonstrates manufacturers' increasing forecast satisfaction with the associated product-groups. Given manufacturers' subjective forecast expectations, shaped by organisational objectives and market dynamics, demonstrating the significant impact of the CF practice on forecast satisfaction supports generalising its application to the FSC. Practitioners can draw on this research when they aim to collaboratively generate accurate forecasts and to conduct long-term collaborations over the associated product-groups. The benefits are not limited to the FSC: manufacturers in other industries can benefit when they collaborate with retailers over similar product-groups with a short shelf life and/or a need for timely and reliable forecasts. In addition, this research opens new research fields in the areas of supply chains, forecasting and information exchange, and draws academics' attention to particular product-groups in the FSC for future research. Nevertheless, this research is limited to dyadic manufacturer-retailer forecast collaborations over a limited range of product-groups; extending it to other types of collaborations and products is a further opportunity for academics.
164

Uso de técnicas de previsão de demanda como ferramenta de apoio à gestão de emergências hospitalares com alto grau de congestionamento

Calegari, Rafael January 2016
Emergency departments (ED) play a key role in the health system, serving as the gateway to hospitals and providing care for patients with injuries and serious illnesses. However, EDs worldwide suffer from rising demand and overcrowding. Multiple factors converge simultaneously to produce this overcrowding, and optimising the management of patient flow can help reduce the problem. In this context, the length of stay of patients in the ED (LSED) is established in the literature as an indicator of patient flow quality. This thesis deals with demand forecasting and management in EDs with a high degree of congestion, covered in three scientific papers that all analyse data from the emergency department of the Hospital de Clínicas de Porto Alegre (HCPA). In the first paper, four forecasting models are applied to predict demand for ED care, evaluating the influence of climate and calendar factors. The second paper uses partial least squares (PLS) regression to predict four indicators related to LSED; the mean length of stay in the ED yielded the best-fitting predictive model, with a mean absolute percentage error (MAPE) of 5.68%. The third paper presents a simulation study to identify the internal hospital factors that influence LSED; the number of CT examinations and the occupancy rate of the clinical and surgical wards were the most influential.
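A brief sketch of the second paper's modelling idea, predicting mean length of stay with PLS regression and scoring it with MAPE, might look as follows; the features and data are synthetic placeholders rather than the HCPA series.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 730                            # e.g. two years of daily observations
X = rng.standard_normal((n, 6))    # stand-ins for arrivals, occupancy, exams
y = 20 + X @ np.array([2.0, 1.5, 0.8, 0.3, 0.0, 0.0]) \
    + rng.standard_normal(n)       # synthetic mean length of stay (hours)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0)
pls = PLSRegression(n_components=3).fit(X_tr, y_tr)
y_hat = pls.predict(X_te).ravel()

mape = np.mean(np.abs((y_te - y_hat) / y_te)) * 100
print(f"MAPE = {mape:.2f}%")       # the study reports 5.68% for this indicator
```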
165

Previsão de níveis fluviais em tempo atual com modelo de regressão adaptativo: aplicação na bacia do rio Uruguai

Moreira, Giuliana Chaves January 2016
This study evaluated the potential of the recursive least squares (RLS) technique for adjusting, in real time, the parameters of autoregressive models with exogenous variables (ARX), where the exogenous inputs are upstream river levels, in order to improve real-time forecasts of river levels. Three aspects were studied jointly: the forecast lead time, the proportion of controlled area in upstream basins, and the drainage area of the forecast section. The research covered three main dimensions: a) methodological (without recursion; with recursion; with recursion and a forgetting factor); b) temporal (six lead times: 10, 24, 34, 48, 58 and 72 hours); and c) spatial (variation of the controlled area of the basin and of the basin area defined by the forecast section). The study area was the Uruguay River basin with its outlet at the Uruguaiana gauging station (190,000 km²) and its nested sub-basins of Itaqui (131,000 km²), Passo São Borja (125,000 km²), Garruchos (116,000 km²), Porto Lucena (95,200 km²), Alto Uruguai (82,300 km²) and Iraí (61,900 km²). River level data, with daily readings at 7 a.m. and 5 p.m. covering 1 January 1991 to 30 June 2015, were provided by the Companhia de Pesquisa de Recursos Minerais (CPRM). Model performance was assessed with the Nash-Sutcliffe coefficient (NS) and the 0.95 quantile of absolute errors (EA(0.95), the error not exceeded with frequency 0.95). The EA(0.95) errors of the best models obtained for each basin always increase as the controlled area is reduced; that is, forecast quality decreases as the control section moves from downstream to upstream. The gain in forecast quality from the adaptive schemes is most evident in EA(0.95), which is more sensitive than NS, shows larger differences between models, and better represents the large errors that occur precisely during flood events. In general, as the basin area decreases, forecasts with shorter lead times become possible, but a larger controlled upstream area improves the performance of smaller basins, especially as measured by EA(0.95). When the proportion of controlled upstream basin is already very large, as for alternatives 1 and 2 used for forecasting at Itaqui (88.5% and 95.4%, respectively), the adaptive schemes make little difference to the results. For basins with smaller controlled upstream areas, however, such as Porto Lucena under alternative 2 (65% controlled area), the performance gain from the complete adaptive scheme (recursive least squares with a forgetting factor, MQR+f.e.) becomes relevant.
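The adaptive scheme at the heart of the study, recursive least squares with a forgetting factor updating an ARX model, can be sketched as follows; the regressor (two past levels at the forecast section plus a lagged upstream level) and all data are illustrative assumptions, not the CPRM series.

```python
import numpy as np

def rls_step(theta, P, x, y, lam=0.98):
    """One RLS update with forgetting factor lam (lam = 1 gives plain RLS)."""
    Px = P @ x
    k = Px / (lam + x @ Px)              # gain vector
    theta = theta + k * (y - x @ theta)  # correct by the prediction error
    P = (P - np.outer(k, Px)) / lam      # discount old information
    return theta, P

# Synthetic levels at the forecast section (h) and a lagged upstream proxy.
rng = np.random.default_rng(3)
h = 50 + np.cumsum(0.2 * rng.standard_normal(500))
h_up = np.concatenate([np.full(5, h[0]), h[:-5]]) \
       + 0.1 * rng.standard_normal(500)

theta, P = np.zeros(3), 1e3 * np.eye(3)
for t in range(5, 500):
    x = np.array([h[t - 1], h[t - 2], h_up[t]])   # ARX regressor
    theta, P = rls_step(theta, P, x, h[t])
```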
166

Estimação de parâmetros de máquinas de indução através de ensaio de partida em vazio

Sogari, Paulo Antônio Brudna January 2017
This work proposes methods for estimating induction motor parameters through the least squares method, measuring only stator voltages, currents and resistance in a no-load startup test. Procedures are detailed for processing the measured signals and for estimating the magnetic flux and the mechanical speed of the motor. For the electrical parameters, methods are proposed that differ in their requirements and in whether the parameters are treated as time-invariant or time-varying. For the latter case, parameters are estimated over data windows, applying a locally time-invariant model to different parts of the test. Simulations validate the proposed methods, and test data from three motors of different power ratings are used to analyse the scale of parameter variation during startup. Results obtained with and without allowing for parameter variation are compared.
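The windowed treatment of time-varying parameters reduces, in essence, to refitting a locally time-invariant model by least squares on successive segments of the record. A toy sketch of that idea follows (a single slowly drifting coefficient and synthetic data, nothing motor-specific):

```python
import numpy as np

rng = np.random.default_rng(4)
n, win = 2000, 200
t = np.linspace(0.0, 1.0, n)
a_true = 1.0 + 0.5 * t                   # slowly varying parameter
x = rng.standard_normal(n)
y = a_true * x + 0.05 * rng.standard_normal(n)

estimates = []
for start in range(0, n - win + 1, win):
    sl = slice(start, start + win)
    A = x[sl, None]                      # regressor matrix for this window
    a_hat, *_ = np.linalg.lstsq(A, y[sl], rcond=None)
    estimates.append(a_hat[0])
print(np.round(estimates, 3))            # drifts from about 1.0 up to 1.5
```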
167

Time-varying linear predictive coding of speech signals.

Hall, Mark Gilbert January 1977
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1977. / Microfiche copy available in Archives and Engineering. / Includes bibliographical references.
168

Ajuste do modelo de Orskov & McDonald (1979) a dados de degradação ruminal in situ utilizando mínimos quadrados ponderados / Orskov and McDonald's model adjustment to ruminal degradation in situ data using weighted least squares

Soares, Ana Paula Meira 27 September 2007
The main objective of this work was to study the differences between the results obtained with weighted least squares and with ordinary least squares when fitting the model of Orskov and McDonald (1979) to degradation data for dry matter (DM) and acid detergent fibre (ADF) in fistulated Nelore steers, using the in situ technique. Data came from an experiment with a 4x4 Latin square design (four animals and four periods) whose treatments were: a diet with calcium salts of fatty acids and monensin (A); a diet with whole cottonseed and monensin (B); a control diet with monensin (C); and a diet with whole cottonseed without monensin (D). Degradability was measured on eight occasions (0, 3, 6, 12, 24, 48, 72 and 96 hours). Since these measurements are taken repeatedly on the same animal, the variances of the responses at the different occasions are not expected to be equal. The analyses used both the original data (DM and ADF) and data corrected for animal and period effects. In general, the use of weighted least squares, with weights given by the inverse of the variance of the data at each occasion, together with the removal of animal and period effects from the original data, changed the results of the analyses, increasing the test statistics and altering their significance.
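For concreteness, the comparison at the centre of this study can be sketched as below, fitting the Orskov and McDonald (1979) curve p(t) = a + b(1 - exp(-ct)) by ordinary and by inverse-variance weighted least squares; the degradation data and standard deviations are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def orskov(t, a, b, c):
    # a: soluble fraction, b: degradable fraction, c: degradation rate
    return a + b * (1.0 - np.exp(-c * t))

t = np.array([0, 3, 6, 12, 24, 48, 72, 96], dtype=float)      # hours
p = np.array([22, 30, 36, 45, 57, 68, 72, 74], dtype=float)   # degradation %
sd = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 4.0, 5.0, 5.5])       # per-occasion sd

ols, _ = curve_fit(orskov, t, p, p0=(20.0, 55.0, 0.05))
wls, _ = curve_fit(orskov, t, p, p0=(20.0, 55.0, 0.05),
                   sigma=sd, absolute_sigma=True)   # weights of 1/variance
print(np.round(ols, 3), np.round(wls, 3))           # compare the two fits
```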
169

Heuristic discovery and design of promoters for the fine-control of metabolism in industrially relevant microbes

Gilman, James January 2018
Predictable, robust genetic parts, including constitutive promoters, are one of the defining attributes of synthetic biology. Ideally, candidate promoters should cover a broad range of expression strengths and yield homogeneous output, whilst also being orthogonal to endogenous regulatory pathways. However, such libraries are not always readily available in non-model organisms, such as the industrially relevant genus Geobacillus. A multitude of approaches are available for the identification and de novo design of prokaryotic promoters, although it may be unclear which methodology is most practical in an industrial context. Endogenous promoters may be individually isolated from upstream of well-understood genes, or bioinformatically identified en masse. Alternatively, pre-existing promoters may be mutagenised, or mathematical abstraction can be used to model promoter strength and design de novo synthetic regulatory sequences. In this investigation, bioinformatic, mathematical and mutagenic approaches to promoter discovery were directly compared. Hundreds of previously uncharacterised putative promoters were bioinformatically identified from the core genome of four Geobacillus species, and a rational sampling method was used to select sequences for in vivo characterisation. A library of 95 promoters covered a 2-log range of expression strengths when characterised in vivo using fluorescent reporter proteins. Data derived from this experimental characterisation were used to train Artificial Neural Network, Partial Least Squares and Random Forest statistical models, which quantifiably inferred the relationship between DNA sequence and function. The resulting models showed limited predictive power but good descriptive power. In particular, the models highlighted the importance of sequences upstream of the canonical -35 and -10 motifs in determining promoter function in Geobacillus. Additionally, two commonly used mutagenic techniques for promoter production, Saturation Mutagenesis of Flanking Regions and error-prone PCR, were applied. The resulting sequence libraries showed limited promoter activity, underlining the difficulty of deriving synthetic promoters in species where understanding of transcription regulation is limited. As such, bioinformatic identification and deep characterisation of endogenous promoter elements was posited as the most practical approach for deriving promoter libraries in non-model organisms of industrial interest.
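A condensed sketch of the descriptive-modelling step, under the assumption that promoters are one-hot encoded and paired with a measured expression strength; a single random forest stands in here for the three model families used in the thesis, and the sequence length and data are placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(5)
n_prom, length = 95, 60                  # library size; assumed 60 bp window
seqs = rng.integers(0, 4, size=(n_prom, length))      # A/C/G/T coded 0-3
X = np.eye(4)[seqs].reshape(n_prom, -1)               # one-hot, 4 per position
strength = rng.lognormal(sigma=1.0, size=n_prom)      # stand-in expression data

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, strength)

# Summing importances per position hints at which sequence regions
# (e.g. around the -35 and -10 motifs) the model finds informative.
per_position = rf.feature_importances_.reshape(length, 4).sum(axis=1)
print(per_position.argmax())             # most informative position
```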
170

Empirical studies on stock return predictability and international risk exposure

Lu, Qinye January 2016
This thesis consists of one stock return predictability study and two international risk exposure studies. The first study shows that the statistical significance of the out-of-sample predictability of market returns reported by Kelly and Pruitt (2013), using a partial least squares methodology constructed from the valuation ratios of portfolios, is overstated for two reasons. Firstly, the analysis is conducted on gross returns rather than excess returns, and this raises the apparent predictability of the equity premium through the inclusion of predictable movements in interest rates. Secondly, the bootstrap statistics used to assess out-of-sample significance do not account for small-sample bias in the estimated coefficients. This bias is well known to affect in-sample tests of significance, and I show that it is also important for out-of-sample tests. Accounting for both effects can radically change the conclusions; for example, the recursive out-of-sample R2 values for the sample period 1965-2010 are insignificant for the prediction of one-year excess returns and one-month returns, except in the case of the book-to-market ratios of six size- and value-sorted portfolios, which are significant at the 10% level. The second study examines whether U.S. common stocks are exposed to international risks, defined as shocks to foreign markets that are orthogonal to U.S. market returns. By sorting stocks on past exposure to this risk factor, I show that it is possible to create portfolios with an ex-post spread in exposure to international risk. I examine whether international risk is priced in the cross-section of U.S. stocks and find that, for small stocks, an increase in exposure to international risk results in lower returns relative to the Fama-French three-factor model. I conduct a similar analysis on a measure of the international value premium and find little evidence of this risk being priced in U.S. stocks. The third study examines whether portfolios of U.S. stocks can mimic foreign index returns, thereby providing investors with the benefits of international diversification without the need to invest directly in assets that trade abroad. I test this proposition using index data from seven developed markets and eight emerging markets over the period 1975-2013. Portfolios of U.S. stocks are constructed out-of-sample to mimic these international indices using a step-wise procedure that selects from a variety of industry portfolios, stocks of multinational corporations, country funds and American depositary receipts. I also use a partial least squares approach to form mimicking portfolios. I show that investors are able to gain considerable exposure to emerging market indices using domestically traded stocks. However, for developed market indices it is difficult to obtain home-made exposure beyond the simple exposure of foreign indices to the U.S. market factor. Using mean-variance spanning tests I find that, with few exceptions, international indices do not improve on the investment frontier provided by the domestically constructed alternative of investing in the U.S. market index and portfolios of industries and multinational corporations.
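The out-of-sample comparison in the first study rests on the standard out-of-sample R2, which benchmarks a forecast against the recursively estimated historical mean. A small sketch with placeholder data (not the Kelly and Pruitt series):

```python
import numpy as np

def r2_oos(returns, forecasts, burn_in):
    """Out-of-sample R2 in percent versus the expanding historical mean."""
    r, f = returns[burn_in:], forecasts[burn_in:]
    # the benchmark at time t uses only returns observed before t
    hist_mean = np.array([returns[:t].mean()
                          for t in range(burn_in, len(returns))])
    return 100.0 * (1.0 - np.sum((r - f) ** 2) / np.sum((r - hist_mean) ** 2))

rng = np.random.default_rng(6)
r = 0.005 + 0.04 * rng.standard_normal(540)  # placeholder monthly excess returns
f = 0.005 + 0.01 * rng.standard_normal(540)  # an uninformative forecast series
print(round(r2_oos(r, f, burn_in=120), 2))   # near zero or negative, as expected
```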
