  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
71

A General-Purpose GPU Reservoir Computer

Keith, Tūreiti January 2013
The reservoir computer comprises a reservoir of possibly non-linear, possibly chaotic dynamics. By perturbing and taking outputs from this reservoir, its dynamics may be harnessed to solve complex problems at “the edge of chaos”. One of the first forms of reservoir computer, the Echo State Network (ESN), is a form of artificial neural network that builds its reservoir from a large, sparsely connected recurrent neural network (RNN). The ESN was initially introduced as an innovative way to train RNNs, which until that point had been a notoriously difficult task. The innovation of the ESN is that, rather than training the RNN weights, only the output is trained. If this output is assumed to be linear, then linear regression may be used. This work presents an implementation of the Echo State Network, together with an offline linear regression training method based on Tikhonov regularisation, targeting the general-purpose graphics processing unit (GPU, or GPGPU). The behaviour of the implementation was examined by comparing it with a central processing unit (CPU) implementation and by assessing its performance on several studied learning problems. These assessments were performed using all 4 cores of the Intel i7-980 CPU and an Nvidia GTX480. Compared with the CPU implementation, the GPU ESN implementation demonstrated a speed-up starting from a reservoir size of between 512 and 1,024, with a maximum speed-up of approximately 6 at the largest reservoir size tested (2,048). The Tikhonov regularisation (TR) implementation was also compared with a CPU implementation. Unlike the ESN execution, the GPU TR implementation was mostly slower than the CPU implementation; speed-ups were observed only at the largest reservoir and state history sizes, the largest being 2.6813. The learning behaviour of the GPU ESN was tested on three problems: a sinusoid, a Mackey-Glass time series, and a multiple superimposed oscillator (MSO). The normalised root-mean-square errors of the predictors were compared: the best observed sinusoid predictor outperformed the best MSO predictor by 4 orders of magnitude, and the best observed MSO predictor in turn outperformed the best Mackey-Glass predictor by 2 orders of magnitude.
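The offline training step described above reduces to ridge (Tikhonov-regularised) regression on the harvested reservoir states. Below is a minimal CPU-side NumPy sketch of that idea; all sizes, defaults and function names are illustrative assumptions, not the thesis's GPU implementation.

```python
import numpy as np

def esn_predict(u, y, n_res=512, spectral_radius=0.9, density=0.02,
                ridge=1e-6, washout=100, seed=0):
    """Minimal Echo State Network with a Tikhonov-regularised linear readout.

    u: inputs, shape (T, n_in); y: targets, shape (T, n_out).
    All names and defaults are illustrative, not taken from the thesis.
    """
    rng = np.random.default_rng(seed)
    n_in = u.shape[1]

    # Sparse random reservoir, rescaled to the desired spectral radius.
    W = rng.uniform(-1, 1, (n_res, n_res)) * (rng.random((n_res, n_res)) < density)
    W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))
    W_in = rng.uniform(-1, 1, (n_res, n_in))

    # Drive the reservoir and collect states, discarding a washout period.
    x = np.zeros(n_res)
    states = []
    for t in range(len(u)):
        x = np.tanh(W @ x + W_in @ u[t])
        states.append(x.copy())
    X = np.array(states)[washout:]
    Y = y[washout:]

    # Offline Tikhonov-regularised readout: solve (X^T X + lambda*I) W = X^T Y.
    W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ Y)
    return X @ W_out  # readout predictions on the training states
```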
72

Exact Markov chain Monte Carlo and Bayesian linear regression

Bentley, Jason Phillip January 2009
In this work we investigate the use of perfect sampling methods within the context of Bayesian linear regression. We focus on inference problems related to the marginal posterior model probabilities; model-averaged inference for the response and Bayesian variable selection are considered. Perfect sampling is an alternative form of Markov chain Monte Carlo (MCMC) that generates exact sample points from the posterior of interest, removing the need for the burn-in assessment faced by traditional MCMC methods. For model-averaged inference, we find that the monotone Gibbs coupling from the past (CFTP) algorithm is the preferred choice. This requires that the predictor matrix be orthogonal, preventing variable selection but allowing model averaging for prediction of the response. Exploring choices of priors for the parameters in the Bayesian linear model, we investigate sufficient conditions for monotonicity assuming Gaussian errors, and discover that a number of sufficient conditions exist, besides an orthogonal predictor matrix, for the construction of a monotone Gibbs Markov chain. Since an orthogonal predictor matrix is required, we investigate new methods of orthogonalizing the original predictor matrix. We find that a new method using the modified Gram-Schmidt orthogonalization procedure performs comparably with existing transformation methods, such as generalized principal components. Accounting for the effect of using an orthogonal predictor matrix, we find that model-averaged inference for in-sample prediction of the response is comparable between the original and orthogonal predictor matrices. The Gibbs sampler is then investigated for sampling with both the original and the orthogonal predictor matrix. We find that a hybrid method, using a standard Gibbs sampler on the orthogonal space in conjunction with the monotone CFTP Gibbs sampler, provides the fastest computation and convergence to the posterior distribution. We conclude that the hybrid approach should be used when the monotone Gibbs CFTP sampler becomes impractical due to large backwards coupling times; we demonstrate that such times occur when the sample size is close to the number of predictors, or when hyper-parameter choices increase model competition. The monotone Gibbs CFTP sampler should be taken advantage of when the backwards coupling time is small. For the problem of variable selection we turn to the exact version of the independent Metropolis-Hastings (IMH) algorithm. We reiterate the notion that the exact IMH sampler is redundant, being a needlessly complicated rejection sampler, and then determine that a rejection sampler is feasible for variable selection when the sample size is close to the number of predictors and Zellner's prior is used with a small value of the hyper-parameter c. Finally, we use the example of simulating from the posterior of c conditional on a model to demonstrate how the exact IMH viewpoint clarifies how the rejection sampler can be adapted to improve efficiency.
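For orientation, a standard (non-perfect) Gibbs sampler for Bayesian linear regression is sketched below; the monotone CFTP sampler studied in the thesis runs coupled copies of a kernel like this one backwards in time until coalescence, which is what removes the burn-in problem. The priors and names here are illustrative assumptions, not the thesis's setup.

```python
import numpy as np

def gibbs_blr(X, y, n_iter=5000, a0=1.0, b0=1.0, tau0=1e-2, seed=0):
    """Standard Gibbs sampler for Bayesian linear regression with a
    N(0, tau0^-1 I) prior on beta and a Gamma(a0, b0) prior on the noise
    precision. Illustrative only: this kernel still needs burn-in,
    unlike the exact CFTP construction described above.
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    beta, prec = np.zeros(p), 1.0
    XtX, Xty = X.T @ X, X.T @ y
    draws = np.empty((n_iter, p))
    for i in range(n_iter):
        # beta | prec, y ~ N(mu, Sigma)
        Sigma = np.linalg.inv(prec * XtX + tau0 * np.eye(p))
        mu = Sigma @ (prec * Xty)
        beta = rng.multivariate_normal(mu, Sigma)
        # prec | beta, y ~ Gamma(a0 + n/2, rate = b0 + RSS/2)
        resid = y - X @ beta
        prec = rng.gamma(a0 + n / 2, 1.0 / (b0 + resid @ resid / 2))
        draws[i] = beta
    return draws
```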
73

Time series model for waste utilization

Michailova, Olga 30 June 2014
In this work, an analysis of the animal waste utilization (rendering) process was performed. The main task was to find a way to predict the end of the desiccation stage, since the ability to predict this end point could significantly reduce energy consumption and the cost of the utilization process. A time series forecasting model was used, a method for change point detection was proposed, and linear regression was also applied to the task.
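One simple way to detect such a change point with linear regression is to fit two segments and pick the breakpoint that minimises the total squared error. The sketch below illustrates that generic idea only; the thesis's own detection method is not reproduced here.

```python
import numpy as np

def change_point(t, y):
    """Fit two linear segments over a drying curve and return the index
    where splitting gives the smallest total squared error. A generic
    sketch, not the method proposed in the thesis.
    """
    best, best_sse = None, np.inf
    for k in range(2, len(t) - 2):          # candidate breakpoints
        sse = 0.0
        for seg in (slice(0, k), slice(k, len(t))):
            ts, ys = t[seg], y[seg]
            A = np.vstack([ts, np.ones(len(ts))]).T
            coef, res, *_ = np.linalg.lstsq(A, ys, rcond=None)
            # lstsq returns residuals only when well-determined; fall back otherwise
            sse += res[0] if res.size else ((ys - A @ coef) ** 2).sum()
        if sse < best_sse:
            best, best_sse = k, sse
    return best  # index where the process regime changes
```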
74

Power Analysis in Applied Linear Regression for Cell Type-Specific Differential Expression Detection

Glass, Edmund 01 January 2016
The goal of many human disease-oriented studies is to detect molecular mechanisms that differ between healthy controls and patients. Yet commonly used gene expression measurements from any tissue suffer from variability of cell composition. This variability hinders the detection of differentially expressed genes and is often ignored. However, this variability may actually be advantageous: heterogeneous gene expression measurements coupled with cell counts may provide deeper insights into gene expression differences at the cell type-specific level. Published computational methods use linear regression to estimate cell type-specific differential expression, yet they do not consider many artifacts hidden in high-dimensional gene expression data that may negatively affect the performance of linear regression. In this dissertation we specifically address the parameter space involved in the most rigorous use of linear regression to estimate cell type-specific differential expression and report under which conditions significant detection is probable. We define the parameters affecting the sensitivity of cell type-specific differential expression estimation as follows: sample size, cell type-specific proportion variability, mean squared error (the spread of observations around the linear regression line), conditioning of the cell proportions predictor matrix, and the size of the actual cell type-specific differential expression. Each parameter, with the exception of cell type-specific differential expression (effect size), affects the variability of cell type-specific differential expression estimates. We have developed a power-analysis approach to cell-type-by-cell-type, genomic-site-by-site differential expression detection that relies upon Welch's two-sample t-test, factors in differences in the variability of cell type-specific expression estimates, and reduces false discoveries. To this end we have published an R package, LRCDE, available on GitHub (http://www.github.com/ERGlass/lrcde.dev), which outputs observed statistics of cell type-specific differential expression, including the two-sample t-statistic, its p-value, and power calculated from the two-sample t-statistic on a genomic site-by-site basis.
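The core power computation for Welch's two-sample t-test can be illustrated in a few lines using the noncentral t distribution. This is a generic textbook-style calculation, not the LRCDE implementation, and all parameter names are assumptions.

```python
import numpy as np
from scipy import stats

def welch_power(mean_diff, sd1, sd2, n1, n2, alpha=0.05):
    """Approximate power of Welch's two-sample t-test for a given true
    difference between two cell type-specific expression estimates.
    """
    se = np.sqrt(sd1**2 / n1 + sd2**2 / n2)
    # Welch-Satterthwaite degrees of freedom
    df = se**4 / ((sd1**2 / n1) ** 2 / (n1 - 1) + (sd2**2 / n2) ** 2 / (n2 - 1))
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    ncp = mean_diff / se                      # noncentrality parameter
    # power = P(|T| > t_crit) under the noncentral t distribution
    return 1 - stats.nct.cdf(t_crit, df, ncp) + stats.nct.cdf(-t_crit, df, ncp)

# Hypothetical usage: power to detect a difference of 1.5 units given
# estimate standard deviations of 1.0 and 1.2 with 15 samples per group.
# print(welch_power(1.5, 1.0, 1.2, 15, 15))
```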
75

Estimates of parameters based on rounded data

Dortová, Zuzana January 2016
This work discusses estimation based on rounded data. It describes parameter estimation for AR and MA time series models and for linear regression, and presents different kinds of estimates based on rounded data. The work focuses on the AR(1) time series model and linear regression, where simulations supplement the theory and the methods are compared on rounded and unrounded data. In addition, the linear regression comparison is illustrated on an example with graph data.
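A minimal simulation of the kind described, comparing least-squares estimates computed from exact and rounded observations, might look as follows; all parameter values are made up for illustration.

```python
import numpy as np

def rounding_demo(n=200, reps=2000, beta=(1.0, 0.5), sigma=0.3, seed=0):
    """Monte Carlo comparison of OLS slope estimates from exact versus
    rounded responses. Returns (mean, std) of the slope estimate for
    each case; parameter values are illustrative.
    """
    rng = np.random.default_rng(seed)
    exact, rounded = [], []
    for _ in range(reps):
        x = rng.uniform(0, 10, n)
        y = beta[0] + beta[1] * x + rng.normal(0, sigma, n)
        A = np.vstack([np.ones(n), x]).T
        exact.append(np.linalg.lstsq(A, y, rcond=None)[0][1])
        # Round responses to integers, as if recorded on a coarse scale.
        rounded.append(np.linalg.lstsq(A, np.round(y), rcond=None)[0][1])
    return (np.mean(exact), np.std(exact)), (np.mean(rounded), np.std(rounded))
```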
76

Comparison of predictors

Jelínková, Hana January 2011
No description available.
77

Analysis of reference evapotranspiration from lysimetric measurements and statistical tuning of the estimates of nine empirical-theoretical equations based on the Penman-Monteith equation

Medeiros, Patrick Valverde 24 April 2008
The quantification of evapotranspiration is an essential task for determining the water balance in a watershed and for establishing a crop's water deficit. The present work therefore analyses the reference evapotranspiration (ETo) for the region of Jaboticabal-SP. The behaviour of the phenomenon in the region was studied from data of a battery of 12 drainage lysimeters (EToLis) and from theoretical estimates by 10 different equations available in the literature. A statistical correlation analysis indicated that the theoretical ETo estimates, compared with EToLis measured in the drainage lysimeters, did not present good agreement and error indices. Admitting that the operation of the lysimeters did not allow a reliable determination of ETo, a local adjustment of the other ETo estimation methodologies was proposed: an auto-regression (AR) of the residuals of these equations against the annual mean estimated by the Penman-Monteith equation (EToPM), taken as the standard, over fortnightly and monthly periods. Adjustment through simple linear regression was also analysed. The results indicate that effective radiation is the most important climatic variable for establishing ETo in the region. The Penman-Monteith estimate showed excellent agreement with the Makkink (1957) equation and the energy balance. The proposed local adjustments gave excellent results for most of the equations tested, especially the FAO-24 solar radiation, Makkink (1957), Jensen-Haise (1963), Camargo (1971), radiation balance, Turc (1961) and Thornthwaite (1948) equations. The adjustment by simple linear regression is easier to carry out and also gave excellent results.
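The simple linear regression adjustment amounts to fitting EToPM as a linear function of an empirical estimate on historical data and applying the fitted line to new estimates. A minimal sketch, with illustrative variable names:

```python
import numpy as np

def calibrate_eto(eto_empirical, eto_pm):
    """Local calibration of an empirical ETo series against the
    Penman-Monteith standard via simple linear regression, mirroring the
    simpler of the two adjustments tested in the thesis.
    """
    A = np.vstack([np.ones_like(eto_empirical), eto_empirical]).T
    (a, b), *_ = np.linalg.lstsq(A, eto_pm, rcond=None)
    return a, b  # adjusted estimate: ETo_adj = a + b * ETo_empirical

# Hypothetical usage with fortnightly series of Makkink estimates:
# a, b = calibrate_eto(eto_makkink, eto_pm)
# eto_adjusted = a + b * eto_makkink
```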
78

Study of the X-axis positioning error as a function of the temperature of a machining centre

Nascimento, Cláudia Hespanholo 07 August 2015
In today's manufacturing industry, companies stand out if they are able to meet high production demand quickly and with quality products. During manufacturing there are several sources of error that affect the accuracy of the machining process, so a better understanding of these errors is important if correction techniques are to be implemented in the numerical control of the machine tool (MT) and process accuracy improved. In this context, the main goal of this work is to develop a method for correcting positioning errors along the X axis, taking into consideration the temperature variation measured experimentally at specific points of the MT. First, a survey of experimental positioning errors along the X axis of the MT was conducted under three different working conditions, with temperature variation data collected simultaneously. The data were treated and then synthesised using the methodology of homogeneous transformation matrices, which made it possible to store all the positioning errors related to the trajectory of the MT table along the X axis. The elements of the resulting matrix are used as input data for a multiple linear regression analysis by the method of least squares, which correlates the temperature variables with the positioning errors. The linear equations obtained from the regression analysis generate predicted values for the positioning errors, which are then used to correct these errors. The equations require little computer processing and can therefore later be implemented in the numerical control of the MT to correct positioning errors due to thermal deformation. The final results showed that errors of 60 µm were reduced to 10 µm. The synthesis of the positioning errors in homogeneous transformation matrices proved important for applying them to the regression method.
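The regression step described above can be sketched as an ordinary least-squares fit of the positioning error on the measured temperatures. The snippet below shows only that step, with illustrative names; the homogeneous-transformation bookkeeping is omitted.

```python
import numpy as np

def fit_thermal_error_model(T, e):
    """Least-squares fit of positioning error as a linear function of
    temperatures measured at several points of the machine:
    e ~ c0 + c1*T1 + ... + ck*Tk.
    T: (n_samples, k) sensor readings; e: (n_samples,) measured errors.
    """
    A = np.hstack([np.ones((len(T), 1)), T])
    coef, *_ = np.linalg.lstsq(A, e, rcond=None)
    return coef

def predict_error(coef, T_now):
    """Predicted positioning error for current sensor readings, to be
    subtracted from the commanded X position as software compensation."""
    return coef[0] + T_now @ coef[1:]
```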
79

Genetic and phenotypic parameters of the fatty acid profile of milk from Holstein cows

Rodriguez, Mary Ana Petersen 05 July 2013
During the last decades, genetic improvement of dairy cattle in Brazil was based only on the importation of genetic material, resulting in small genetic gains for the traits of economic interest. There is therefore a clear need for genetic evaluation under national environmental conditions, so as to increase milk production together with quality. In this context, knowledge of milk composition is very important for understanding how certain environmental and, especially, genetic factors may influence increases in protein (PROT), fat (FAT) and beneficial fatty acid (FA) content and a reduction in somatic cell count, aiming to improve the nutritional quality of this product. The aim of this study was to predict the levels of the FA of interest using Bayesian linear regression, to estimate variance components and heritability coefficients, and to compare models with different orders of fit using Legendre polynomial functions under random regression models. Milk samples were subjected to gas chromatography and mid-infrared spectrometry for the determination of fatty acids. The results obtained by the two methods were compared using Pearson's correlation, Bland-Altman analysis and Bayesian linear regression; prediction equations were then developed for myristic acid (C14:0) and conjugated linoleic acid (CLA) from simple and multiple Bayesian linear regressions, considering non-informative and informative priors. Legendre orthogonal polynomials from 1st to 6th order were used to fit the random regressions of the traits. Prediction of the FA by linear regression proved viable, with prediction errors ranging from 0.01 to 4.84 g per 100 g of fat for C14:0 and from 0.002 to 1.85 g per 100 g of fat for CLA, the smallest prediction errors being obtained with multiple regression under a non-informative prior. The models that best fitted FAT, PROT, C16:0, C18:0, C18:1c9, CLA, saturated (SAT), unsaturated (UNSAT), monounsaturated (MONO) and polyunsaturated (POLY) fatty acids were of 1st order, while for somatic cell score (SCS) and C14:0 they were of 2nd order. In the best-fitting models, heritability estimates ranged from 0.08 to 0.11 for FAT; 0.28 to 0.35 for PROT; 0.03 to 0.22 for SCS; 0.12 to 0.31 for C16:0; 0.08 to 0.14 for C18:0; 0.24 to 0.43 for C14:0; 0.07 to 0.17 for C18:1c9; 0.13 to 0.39 for CLA; 0.14 to 0.31 for SAT; 0.04 to 0.14 for UNSAT; 0.04 to 0.13 for MONO; 0.09 to 0.20 for POLY; and 0.12 for PROD. We conclude that improvements in the nutritional quality of milk can be obtained by including production traits and the fatty acid profile in genetic selection programmes.
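As an illustration of the prediction step, a conjugate Bayesian linear regression with a Gaussian prior is sketched below. The priors, names and settings are assumptions for the sketch, not those used in the thesis.

```python
import numpy as np

def bayes_linreg_posterior(X, y, tau=1.0, sigma2=1.0):
    """Conjugate Bayesian linear regression: N(0, tau^-1 I) prior on the
    coefficients and known noise variance sigma2. Returns the posterior
    mean and covariance, usable to predict, e.g., C14:0 content from
    spectral predictors. Prior settings are illustrative.
    """
    p = X.shape[1]
    S = np.linalg.inv(X.T @ X / sigma2 + tau * np.eye(p))
    m = S @ X.T @ y / sigma2
    return m, S

def predict(m, S, x_new, sigma2=1.0):
    """Posterior predictive mean and variance at a new observation x_new."""
    return x_new @ m, x_new @ S @ x_new + sigma2
```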
80

The use of social network analysis as a business strategy in the movie industry

Dourado, Rafaela Costa Martins de Mello 16 October 2017
Billions of dollars move through the movie industry every year. For this reason, many scientific studies have intrigued researchers and investors seeking to predict a film's box office. Even so, pre-production predictive studies are scarce, and there are no research proposals that use actors and directors as links between movies. This is a sensible idea, since high box office takings often follow the hiring of actors and directors acclaimed in the movie world. This work therefore sought to answer the research question: Is it possible to predict box office revenues using the relationships between actors and directors as a social indicator? Using Social Network Analysis (SNA) techniques, which describe patterns of relationships between members of a network and examine how involvement in that network helps explain those members' behaviour and attitudes, we describe a novel way of using SNA metrics in a multiple linear regression model to estimate a film's box office given the hiring of a particular director. The network was built with information on the lead actors and directors of 1,144 films from 2000 to 2013, and the proposed model was validated with information on films from 2014 to 2016. We also present a detailed exploratory description of the movie network, comparing actors and directors and exploring their relationships through social network analysis. As a result, we identified the leading actors and directors of the movie network, as well as communities of influential actors and directors, which can be used for more effective marketing actions, greater visibility of media events, and cast and director hiring decisions, all to increase a film's reach and thus maximise the box office of new productions. Moreover, with an appropriate coefficient of determination, the proposed model explains 67.66% of the variability of the square root of the box office, given the hiring of a particular director. We conclude that SNA concepts and metrics, combined with statistical inference, can be used as a business strategy when choosing the director of a new film production, aiming for success by maximising worldwide box office.
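A sketch of the pipeline under stated assumptions: build the actor-director graph, compute SNA centrality metrics for a candidate director, and feed them into a fitted multiple linear regression on the square root of box office. The particular metrics chosen here are illustrative; the thesis's exact feature set is not reproduced.

```python
import networkx as nx
import numpy as np

def director_features(G: nx.Graph, director: str) -> np.ndarray:
    """Centrality-based feature vector for one director. The three
    metrics are assumptions chosen for illustration."""
    return np.array([
        1.0,                                     # intercept
        nx.degree_centrality(G)[director],       # how connected
        nx.betweenness_centrality(G)[director],  # how much of a bridge
        nx.closeness_centrality(G)[director],    # how central overall
    ])

def predict_sqrt_box_office(coef: np.ndarray, G: nx.Graph, director: str) -> float:
    """coef would come from np.linalg.lstsq on the 2000-2013 training films,
    regressing sqrt(box office) on each film's director features."""
    return float(director_features(G, director) @ coef)
```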
