61

Automatic regularization technique for the estimation of neural receptive fields

Park, Mijung 02 November 2010
A fundamental question in visual neuroscience is how visual stimuli are functionally related to neural responses. This relationship is often explained by the notion of a receptive field: an approximately linear or quasi-linear filter that encodes high-dimensional visual stimuli into neural spikes. Traditional methods for estimating this filter do not efficiently exploit prior information about the structure of neural receptive fields. Here, we propose several approaches to designing the prior distribution over the filter, motivated by the neurophysiological observation that receptive fields tend to be localized both in space-time and in the spatio-temporal frequency domain. To automatically regularize the estimation of neural receptive fields, we use the evidence optimization technique: MAP (maximum a posteriori) estimation under a prior distribution whose parameters are set by maximizing the marginal likelihood. Simulation results show that the proposed methods can estimate the receptive field using datasets tens to hundreds of times smaller than those required by traditional methods.
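As a concrete illustration of the evidence-optimization idea, here is a minimal sketch that fits a linear filter under a simple isotropic (ridge) Gaussian prior, with the prior precision set by maximizing the marginal likelihood via MacKay's fixed-point updates. This is a deliberate simplification: the thesis's localized space-time and frequency-domain priors would replace the identity prior covariance assumed here.

```python
import numpy as np

def evidence_ridge(X, y, n_iter=50):
    """Empirical-Bayes ridge: choose the prior precision alpha by
    maximizing the marginal likelihood, then return the MAP filter."""
    n, d = X.shape
    alpha, noise_var = 1.0, np.var(y)               # crude initial guesses
    XtX, Xty = X.T @ X, X.T @ y
    for _ in range(n_iter):
        A = XtX / noise_var + alpha * np.eye(d)     # posterior precision
        w = np.linalg.solve(A, Xty) / noise_var     # MAP / posterior mean
        gamma = d - alpha * np.trace(np.linalg.inv(A))  # effective params
        alpha = gamma / (w @ w)                     # MacKay fixed point
        noise_var = np.sum((y - X @ w) ** 2) / (n - gamma)
    return w, alpha

# Usage on simulated data: recover a localized filter from noisy responses.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
w_true = np.exp(-0.5 * ((np.arange(50) - 25) / 4) ** 2)   # localized filter
w_map, _ = evidence_ridge(X, X @ w_true + rng.normal(0, 1.0, 200))
```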
62

Privacy Preserving Distributed Data Mining

Lin, Zhenmin 01 January 2012
Privacy-preserving distributed data mining aims to design secure protocols that allow multiple parties to conduct collaborative data mining while protecting data privacy. My research focuses on the design and implementation of privacy-preserving two-party protocols based on homomorphic encryption. I present new results in this area, including new secure protocols for basic operations and two fundamental privacy-preserving data mining protocols. I propose a number of secure protocols for basic operations in an additive secret-sharing scheme based on homomorphic encryption. I derive a basic relationship between a secret number and its shares, from which I develop efficient protocols for secure comparison and secure division with a public divisor. I also design a secure inverse square root protocol based on Newton's iterative method, and hence propose a solution to the secure square root problem. In addition, I propose a secure exponential protocol based on Taylor series expansions. All these protocols are implemented using secure multiplication and can be used to build privacy-preserving distributed data mining protocols. In particular, I develop efficient privacy-preserving protocols for two fundamental data mining tasks: multiple linear regression and EM clustering. Both protocols work for arbitrarily partitioned datasets. The two-party privacy-preserving linear regression protocol is provably secure in the semi-honest model, and the EM clustering protocol discloses only the number of iterations. I provide a proof-of-concept implementation of these protocols in C++, based on the Paillier cryptosystem.
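The "basic relationship between a secret number and its shares" can be illustrated with a minimal additive secret-sharing sketch (in Python for brevity; the thesis implementation is in C++). Addition of secrets reduces to local addition of shares; multiplication of shared values, which the thesis performs with Paillier homomorphic encryption, is omitted here.

```python
import secrets

N = 2**64  # public modulus for the additive sharing (illustrative size)

def share(x):
    """Split secret x into two additive shares with x = (s1 + s2) mod N."""
    s1 = secrets.randbelow(N)
    return s1, (x - s1) % N

def reconstruct(s1, s2):
    return (s1 + s2) % N

# Each party adds its shares locally; reconstructing the share-sums yields
# x + y without either party ever seeing the other's secret.
x1, x2 = share(123)
y1, y2 = share(456)
assert reconstruct((x1 + y1) % N, (x2 + y2) % N) == 123 + 456
```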
63

A station-level analysis of rail transit ridership in Austin

Yang, Qiqian 30 September 2014
Community and Regional Planning / Over the past two decades, Austin has seen tremendous population growth, job growth in the downtown core, and the transportation challenges associated with both. Public transit, and rail in particular, is often regarded as a strategy to help reduce urban traffic congestion. Urban Rail, which combines features of streetcars and light rail, has been introduced to Austin as a new rail transit mode. The City of Austin, Capital Metro, and Lone Star Rail are actively studying the routing, financial, environmental, and community elements associated with a first phase of Urban Rail. This thesis uses 2010 Origin and Destination Rail Transit Survey data from the Capital Metropolitan Transportation Authority. The research focuses on rail transit ridership: two regression models are applied to analyze the factors influencing Austin rail transit ridership, one focusing on socioeconomic characteristics and the other on spatial factors. Our models show that demographic factors have a more significant effect than spatial factors. In addition, this work analyzes the correlations between those factors and makes recommendations based on the results.
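For illustration only, the sketch below mimics the two-model comparison on invented station-level data; the variable names are assumptions, not the survey's actual fields. Each predictor group is fit by ordinary least squares and the fits are compared by R².

```python
import numpy as np

def ols_r2(X, y):
    """Fit y = Xb by least squares (with intercept) and return R^2."""
    X = np.column_stack([np.ones(len(y)), X])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ b
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

# Hypothetical data: 30 stations, ridership vs. two predictor groups.
rng = np.random.default_rng(0)
ridership = rng.poisson(500, size=30).astype(float)
socio = rng.normal(size=(30, 3))    # e.g. income, car ownership, density
spatial = rng.normal(size=(30, 2))  # e.g. distance to CBD, feeder routes
print(ols_r2(socio, ridership), ols_r2(spatial, ridership))
```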
64

A General-Purpose GPU Reservoir Computer

Keith, Tūreiti January 2013
The reservoir computer comprises a reservoir of possibly non-linear, possibly chaotic dynamics. By perturbing and taking outputs from this reservoir, its dynamics may be harnessed to compute complex problems at “the edge of chaos”. One of the first forms of reservoir computer, the Echo State Network (ESN), is an artificial neural network that builds its reservoir from a large, sparsely connected recurrent neural network (RNN). The ESN was initially introduced as an innovative solution for training RNNs, which up until that point had been a notoriously difficult task. The innovation of the ESN is that, rather than training the RNN weights, only the output is trained; if this output is assumed to be linear, then linear regression may be used. This work presents an effort to implement the Echo State Network and an offline linear regression training method based on Tikhonov regularisation, targeting the general-purpose graphics processing unit (GPGPU). The behaviour of the implementation was examined by comparing it with a central processing unit (CPU) implementation and by assessing its performance on several studied learning problems. These assessments were performed using all 4 cores of the Intel i7-980 CPU and an Nvidia GTX480. When compared with the CPU implementation, the GPU ESN implementation demonstrated a speed-up starting from a reservoir size of between 512 and 1,024, with a maximum speed-up of approximately 6× observed at the largest reservoir size tested (2,048). The Tikhonov regularisation (TR) implementation was also compared with a CPU implementation. Unlike the ESN execution, the GPU TR implementation was largely slower than the CPU implementation; speed-ups were observed only at the largest reservoir and state-history sizes, the largest being 2.6813. The learning behaviour of the GPU ESN was tested on three problems: a sinusoid, a Mackey-Glass time series, and a multiple superimposed oscillator (MSO). The normalised root-mean-squared errors of the predictors were compared. The best observed sinusoid predictor outperformed the best MSO predictor by four orders of magnitude; in turn, the best observed MSO predictor outperformed the best Mackey-Glass predictor by two orders of magnitude.
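A minimal NumPy sketch of an ESN with an offline Tikhonov-regularised readout, as described above. It assumes a scalar input and output and omits refinements (sparsity, leaky integration, washout) that a practical implementation such as the thesis's GPU version would include.

```python
import numpy as np

def esn_fit(u, y, n_res=200, rho=0.9, ridge=1e-6, seed=0):
    """Echo State Network: fixed random reservoir, linear readout
    trained offline by Tikhonov-regularised least squares."""
    rng = np.random.default_rng(seed)
    W_in = rng.uniform(-0.5, 0.5, n_res)
    W = rng.uniform(-0.5, 0.5, (n_res, n_res))
    W *= rho / max(abs(np.linalg.eigvals(W)))   # set spectral radius
    x, states = np.zeros(n_res), []
    for u_t in u:
        x = np.tanh(W @ x + W_in * u_t)         # reservoir update
        states.append(x.copy())
    S = np.array(states)
    # Tikhonov solution: W_out = (S^T S + ridge*I)^{-1} S^T y
    W_out = np.linalg.solve(S.T @ S + ridge * np.eye(n_res), S.T @ y)
    return S @ W_out, W_out

# Usage: one-step-ahead prediction of a sinusoid.
t = np.linspace(0, 20 * np.pi, 2000)
pred, _ = esn_fit(np.sin(t)[:-1], np.sin(t)[1:])
```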
65

Exact Markov chain Monte Carlo and Bayesian linear regression

Bentley, Jason Phillip January 2009
In this work we investigate the use of perfect sampling methods within the context of Bayesian linear regression. We focus on inference problems related to the marginal posterior model probabilities; model-averaged inference for the response and Bayesian variable selection are considered. Perfect sampling is an alternate form of Markov chain Monte Carlo that generates exact sample points from the posterior of interest, removing the need for the burn-in assessment faced by traditional MCMC methods. For model-averaged inference, we find the monotone Gibbs coupling-from-the-past (CFTP) algorithm is the preferred choice. This requires that the predictor matrix be orthogonal, preventing variable selection but allowing model averaging for prediction of the response. Exploring choices of priors for the parameters of the Bayesian linear model, we investigate sufficient conditions for monotonicity, assuming Gaussian errors. We discover that a number of sufficient conditions exist, besides an orthogonal predictor matrix, for the construction of a monotone Gibbs Markov chain. Requiring an orthogonal predictor matrix, we investigate new methods of orthogonalizing the original predictor matrix, and find that a new method using the modified Gram-Schmidt orthogonalization procedure performs comparably with existing transformation methods, such as generalized principal components. Accounting for the effect of using an orthogonal predictor matrix, we discover that model-averaged inference for in-sample prediction of the response is comparable between the original and orthogonal predictor matrices. The Gibbs sampler is then investigated for sampling with the original and with the orthogonal predictor matrix. We find that a hybrid method, using a standard Gibbs sampler on the orthogonal space in conjunction with the monotone CFTP Gibbs sampler, provides the fastest computation and convergence to the posterior distribution. We conclude that the hybrid approach should be used when the monotone Gibbs CFTP sampler becomes impractical due to large backwards coupling times; we demonstrate that such times occur when the sample size is close to the number of predictors, or when hyper-parameter choices increase model competition. The monotone Gibbs CFTP sampler should be taken advantage of when the backwards coupling time is small. For the problem of variable selection we turn to the exact version of the independent Metropolis-Hastings (IMH) algorithm. We reiterate the notion that the exact IMH sampler is redundant, being a needlessly complicated rejection sampler, and determine that a rejection sampler is feasible for variable selection when the sample size is close to the number of predictors and Zellner's prior is used with a small value for the hyper-parameter c. Finally, we use the example of simulating from the posterior of c conditional on a model to demonstrate how the exact IMH viewpoint clarifies how the rejection sampler can be adapted to improve efficiency.
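For orientation, here is an ordinary forward Gibbs sampler for Bayesian linear regression under assumed priors (Gaussian on the coefficients, inverse-gamma on the error variance). The monotone CFTP sampler studied in the thesis runs coupled chains of this kind from the past until coalescence, which this sketch makes no attempt to do.

```python
import numpy as np

def gibbs_blr(X, y, n_samp=2000, tau2=100.0, a0=1.0, b0=1.0, seed=0):
    """Two-block Gibbs sampler: beta ~ N(0, tau2*I), sigma2 ~ IG(a0, b0)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    XtX, Xty = X.T @ X, X.T @ y
    sigma2, draws = 1.0, []
    for _ in range(n_samp):
        # beta | sigma2, y ~ N(m, V)
        V = np.linalg.inv(XtX / sigma2 + np.eye(d) / tau2)
        beta = rng.multivariate_normal(V @ Xty / sigma2, V)
        # sigma2 | beta, y ~ InvGamma(a0 + n/2, b0 + RSS/2)
        rss = np.sum((y - X @ beta) ** 2)
        sigma2 = 1.0 / rng.gamma(a0 + n / 2, 1.0 / (b0 + rss / 2))
        draws.append(beta)
    return np.array(draws)
```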
66

Time series model for waste utilization

Michailova, Olga 30 June 2014
In this work, an analysis of the animal-waste utilization (rendering) process was performed. The main task was to find a way to predict the end of the desiccation process, since the ability to predict this end point may reduce energy consumption. I used a time-series forecasting model and proposed a method for change-point detection; linear regression was also used for this task. The ability to predict the change point would considerably reduce the cost of the utilization process.
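The abstract does not spell out the proposed change-point method, so the sketch below uses a generic stand-in for illustration: fit two linear segments and choose the split that minimises the total residual sum of squares.

```python
import numpy as np

def change_point(y):
    """Return the split index minimising the two-segment linear-fit RSS."""
    t = np.arange(len(y), dtype=float)
    def rss(ts, ys):
        A = np.column_stack([np.ones_like(ts), ts])
        r = ys - A @ np.linalg.lstsq(A, ys, rcond=None)[0]
        return r @ r
    costs = [rss(t[:k], y[:k]) + rss(t[k:], y[k:])
             for k in range(2, len(y) - 2)]
    return 2 + int(np.argmin(costs))

# Usage: a series whose slope flattens once the process nears its end.
rng = np.random.default_rng(1)
y = np.concatenate([np.linspace(10, 2, 60), np.full(40, 2.0)])
print(change_point(y + rng.normal(0, 0.1, 100)))  # detected near index 60
```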
67

Estimates of parameters based on rounded data

Dortová, Zuzana January 2016
This work discusses estimates based on rounded data. It describes estimates of parameters in AR and MA time series and in linear regression, and presents different kinds of estimates based on rounded data. The work focuses on the AR(1) time-series model and on linear regression, where simulations supplement the theory and the methods are compared on rounded and unrounded data. In addition, the comparison for linear regression is illustrated on an example with graph data.
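A quick simulation in the spirit of the comparison described above: the AR(1) coefficient is estimated by least squares from the exact series and from the same series after rounding, showing the effect of rounding on the naive estimate.

```python
import numpy as np

def ar1_ols(x):
    """Least-squares estimate of phi in x_t = phi * x_{t-1} + e_t."""
    return (x[:-1] @ x[1:]) / (x[:-1] @ x[:-1])

rng = np.random.default_rng(2)
phi, n = 0.7, 5000
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()
print(ar1_ols(x), ar1_ols(np.round(x)))  # exact vs. rounded data
```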
68

Comparison of predictors

Jelínková, Hana January 2011
No description available.
69

Analysis of reference evapotranspiration from lysimetric measurements and statistical adjustment of estimates from nine empirical equations based on the Penman-Monteith equation

Medeiros, Patrick Valverde 24 April 2008
The quantification of evapotranspiration is an essential task for determining the water balance in a watershed and for establishing a crop's water deficit. The present work therefore analyzes the reference evapotranspiration (ETo) for the region of Jaboticabal-SP. The behavior of the phenomenon in the region was studied based on data from a battery of 12 drainage lysimeters (EToLis) and on theoretical estimates from 10 different equations available in the literature. A statistical correlation analysis indicated that the theoretical ETo estimates, compared with EToLis measured in the drainage lysimeters, did not present good comparison and error indices. Since the operation of the lysimeters did not allow a reliable determination of ETo, a local adjustment of the theoretical ETo estimation methodologies was proposed, via auto-regression (AR) of the residuals of these equations against the annual average estimated by the Penman-Monteith equation (EToPM), taken as the standard, over fortnightly and monthly periods. Adjustment through simple linear regression was also analyzed. The results indicate that effective radiation is the most important climatic variable for establishing ETo in the region. The Penman-Monteith estimate showed excellent agreement with the Makkink (1957) equation and the energy balance. The proposed local adjustments gave excellent results for most of the equations tested, notably the FAO-24 solar radiation, Makkink (1957), Jensen-Haise (1963), Camargo (1971), radiation balance, Turc (1961), and Thornthwaite (1948) equations. The adjustment by simple linear regression is easier to carry out and also gave excellent results.
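A minimal sketch of the simple-linear-regression adjustment, on invented fortnightly values: an empirical ETo series is calibrated against the Penman-Monteith standard by fitting EToPM ≈ a + b·EToEq.

```python
import numpy as np

def calibrate(eto_eq, eto_pm):
    """Fit eto_pm ≈ a + b * eto_eq and return the adjusted series."""
    A = np.column_stack([np.ones_like(eto_eq), eto_eq])
    (a, b), *_ = np.linalg.lstsq(A, eto_pm, rcond=None)
    return a + b * eto_eq

# Invented fortnightly data: a biased empirical estimate pulled toward
# the Penman-Monteith standard by the linear adjustment.
rng = np.random.default_rng(3)
eto_pm = rng.uniform(2, 6, 24)                        # mm/day, standard
eto_eq = 0.8 * eto_pm + 1.5 + rng.normal(0, 0.2, 24)  # biased equation
print(np.mean(np.abs(eto_eq - eto_pm)),
      np.mean(np.abs(calibrate(eto_eq, eto_pm) - eto_pm)))
```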
70

Study of the X-axis positioning error as a function of the temperature of a machining center

Nascimento, Cláudia Hespanholo 07 August 2015
In today's manufacturing industry, companies that can meet production demand quickly and with quality products stand out. During manufacturing there are several sources of error that affect the accuracy of the machining process. It is therefore important to understand these errors so that correction techniques can be implemented in the numerical control of the machine tool (MT) and thus improve process accuracy. In this context, the main goal of this work is to develop a methodology for correcting positioning errors along the X axis, taking into account the temperature variation measured experimentally at specific points of the MT. First, a survey of experimental positioning errors along the X axis of the MT was conducted under three different working conditions, while the temperature variation was measured simultaneously. The data were processed and then synthesized using the methodology of homogeneous transformation matrices, making it possible to store all positioning errors related to the trajectory of the MT table along the X axis. The elements of the resulting matrix are used as input data for a multiple linear regression analysis that, by the method of least squares, correlates the temperature variables with the positioning errors. As a result, the linear equations obtained from the regression analysis generate predicted values for the positioning errors, which are used to correct them. These equations have a low computational cost and can therefore later be implemented in the numerical control of the MT to correct positioning errors due to thermal deformation. The final results showed that errors of 60 µm were reduced to 10 µm. Synthesizing the positioning errors in homogeneous transformation matrices proved essential for applying the regression method.
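A hedged sketch of the regression step on invented data: the positioning error (µm) is modelled as a linear function of temperature readings at several probe points, and the fitted model is then used to correct the measured errors.

```python
import numpy as np

def fit_thermal_model(temps, errors):
    """Least-squares fit: error ≈ b0 + b1*T1 + ... + bk*Tk."""
    A = np.column_stack([np.ones(len(errors)), temps])
    b, *_ = np.linalg.lstsq(A, errors, rcond=None)
    return b

# Invented data: 50 measurement cycles, 4 temperature probes on the MT.
rng = np.random.default_rng(4)
temps = 20 + rng.uniform(0, 15, (50, 4))                  # deg C at probes
errors = 2.0 * (temps[:, 0] - 20) + rng.normal(0, 1, 50)  # microns
b = fit_thermal_model(temps, errors)
corrected = errors - (b[0] + temps @ b[1:])
print(np.abs(errors).mean(), np.abs(corrected).mean())    # before vs. after
```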
