341

Semi-Supervised Classification Using Gaussian Processes

Patel, Amrish 01 1900 (has links)
Gaussian Processes (GPs) are promising Bayesian methods for classification and regression problems. They have also been used for semi-supervised classification tasks. In this thesis, we propose new algorithms for solving the semi-supervised binary classification problem using GP regression (GPR) models. The algorithms are closely related to semi-supervised classification based on support vector regression (SVR) and maximum margin clustering. The proposed algorithms are simple and easy to implement. Also, the hyper-parameters are estimated without resorting to expensive cross-validation techniques. The algorithm based on the sparse GPR model gives a sparse solution directly, unlike the SVR-based algorithm. Use of the sparse GPR model helps make the proposed algorithm scalable. The results of experiments on synthetic and real-world datasets demonstrate the efficacy of the proposed sparse GP-based algorithm for semi-supervised classification.
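As background, the sketch below illustrates the basic GPR building block the abstract refers to (not the authors' semi-supervised algorithms themselves): the ±1 labels are used as regression targets, and an unlabeled point is classified by the sign of the GP posterior mean. The kernel, noise level, and toy data are assumptions.

```python
import numpy as np

def rbf(A, B, ell=1.0, sf=1.0):
    # squared-exponential kernel between row-vector sets A (n,d) and B (m,d)
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return sf**2 * np.exp(-0.5 * d2 / ell**2)

def gpr_predict(Xl, yl, Xu, noise=0.1):
    # posterior mean of a GP regressor trained on labels in {-1, +1}
    K = rbf(Xl, Xl) + noise**2 * np.eye(len(Xl))
    alpha = np.linalg.solve(K, yl)
    return rbf(Xu, Xl) @ alpha

rng = np.random.default_rng(0)
# two labeled Gaussian blobs and a handful of unlabeled points (toy data)
Xl = rng.normal(size=(20, 2)) + np.r_[np.tile([2.0, 0.0], (10, 1)),
                                      np.tile([-2.0, 0.0], (10, 1))]
yl = np.r_[np.ones(10), -np.ones(10)]
Xu = rng.normal(size=(5, 2))
print(np.sign(gpr_predict(Xl, yl, Xu)))  # predicted classes for unlabeled points
```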
342

Integration-based Kalman-filtering for a Dynamic Generalized Linear Trend Model

Schnatter, Sylvia January 1991 (has links) (PDF)
The topic of the paper is filtering for non-Gaussian dynamic (state space) models by approximate computation of posterior moments using numerical integration. A Gauss-Hermite procedure is implemented based on the approximate posterior mode estimator and curvature recently proposed in [2]. This integration-based filtering method is illustrated by a dynamic trend model for non-Gaussian time series. Comparison of the proposed method with other approximations ([15], [2]) is carried out by simulation experiments for time series from Poisson, exponential and Gamma distributions. (author's abstract) / Series: Forschungsberichte / Institut für Statistik
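A minimal sketch of one such integration-based update, assuming a Poisson observation with a Gaussian prior on the log-rate: Newton iterations locate the posterior mode and curvature, and Gauss-Hermite quadrature centred there yields the posterior moments. The observation and prior values are invented for illustration.

```python
import numpy as np

y, mu, sigma = 4.0, 0.0, 1.0                       # observation, prior mean/sd

def log_post(theta):                               # unnormalised log posterior
    return y * theta - np.exp(theta) - 0.5 * ((theta - mu) / sigma) ** 2

# Newton iterations for the posterior mode and curvature
theta = mu
for _ in range(20):
    g = y - np.exp(theta) - (theta - mu) / sigma**2    # first derivative
    h = -np.exp(theta) - 1.0 / sigma**2                # second derivative
    theta -= g / h
s = np.sqrt(-1.0 / h)                              # Laplace std dev at the mode

# Gauss-Hermite nodes/weights approximate \int e^{-x^2} f(x) dx
x, w = np.polynomial.hermite.hermgauss(30)
t = theta + np.sqrt(2.0) * s * x                   # nodes mapped to theta-space
f = np.exp(log_post(t) + x**2)                     # undo the e^{-x^2} factor
norm = np.sum(w * f)                               # normalising constant cancels
mean = np.sum(w * f * t) / norm
var = np.sum(w * f * t**2) / norm - mean**2
print(mean, np.sqrt(var))                          # approximate posterior moments
```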
343

Methodology for global optimization of computationally expensive design problems

Koullias, Stefanos 20 September 2013 (has links)
The design of unconventional aircraft requires early use of high-fidelity physics-based tools to search the unfamiliar design space for optimum designs. Current methods for incorporating high-fidelity tools into early design phases for the purpose of reducing uncertainty are inadequate due to the severely restricted budgets that are common in early design as well as the unfamiliar design space of advanced aircraft. This motivates the need for a robust and efficient global optimization algorithm. This research presents a novel surrogate model-based global optimization algorithm to efficiently search challenging design spaces for optimum designs. The algorithm searches the design space by constructing a fully Bayesian Gaussian process model through a set of observations and then using the model to make new observations in promising areas where the global minimum is likely to occur. The algorithm is incorporated into a methodology that reduces failed cases and infeasible designs and provides large reductions in the objective function values of design problems. Results on four sets of algebraic test problems are presented, and the methodology is applied to an airfoil section design problem and a conceptual aircraft design problem. The method is shown to solve more nonlinearly constrained algebraic test problems than state-of-the-art algorithms and to obtain the largest reduction in the takeoff gross weight of a notional 70-passenger regional jet versus competing design methods.
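The abstract does not spell out the acquisition rule, so the following sketch shows the generic surrogate loop with a plain (non-fully-Bayesian) GP and the common expected-improvement criterion; the test function, kernel, and candidate grid are assumptions.

```python
import numpy as np
from scipy.stats import norm

def f(x):                                   # expensive black box (toy stand-in)
    return np.sin(3 * x) + 0.5 * x**2

def k(a, b, ell=0.3):                       # unit-variance RBF kernel
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

X = np.array([-1.5, 0.0, 1.5]); y = f(X)    # initial observations
grid = np.linspace(-2, 2, 400)              # candidate designs
for _ in range(10):
    Kinv = np.linalg.inv(k(X, X) + 1e-6 * np.eye(len(X)))
    ks = k(grid, X)
    mu = ks @ Kinv @ y                      # GP posterior mean on the grid
    var = np.clip(1.0 - np.einsum('ij,jk,ik->i', ks, Kinv, ks), 1e-12, None)
    sd = np.sqrt(var)
    z = (y.min() - mu) / sd
    ei = (y.min() - mu) * norm.cdf(z) + sd * norm.pdf(z)   # expected improvement
    xn = grid[np.argmax(ei)]                # most promising new design
    X = np.append(X, xn); y = np.append(y, f(xn))
print(X[np.argmin(y)], y.min())             # best design found
```

The loop spends its observations where the surrogate predicts either a low mean or high uncertainty, which is the trade-off the abstract describes.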
344

Degradation modeling for reliability analysis with time-dependent structure based on the inverse Gaussian distribution / Modelagem de degradação para análise de confiabilidade com estrutura dependente do tempo baseada na distribuição gaussiana inversa

Morita, Lia Hanna Martins 07 April 2017 (has links)
Conventional reliability analysis techniques focus on the occurrence of failures over time. However, in situations where failures are rare or almost nonexistent, estimation of the quantities that describe the failure process is compromised. In this context degradation models were developed, whose experimental data are not failure times but some measurable quality characteristic attached to them. Degradation analysis can provide information about the components' lifetime distribution without actually observing failures. In this thesis we propose different methodologies for degradation data based on the inverse Gaussian distribution. Initially, we introduce the inverse Gaussian deterioration rate model for degradation data and a study of its asymptotic properties with simulated data. We then propose an inverse Gaussian process model with frailty as a feasible tool to explore the influence of unobserved covariates, together with a comparative study against the traditional inverse Gaussian process based on simulated data. We also present a mixture inverse Gaussian process model for burn-in tests, whose main interest is to determine the burn-in time and the optimal cutoff point that screen out the weak units from the normal ones in a production line, and a misspecification study carried out with the Wiener and gamma processes. Finally, we consider a more flexible model with a set of cutoff points, wherein the misclassification probabilities are obtained by an exact method with the bivariate inverse Gaussian distribution or an approximate method based on copula theory. The methodology is applied to three real datasets from the literature: the degradation of LASER components, locomotive wheels and cracks in metals.
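As an illustration of the underlying model class (not the thesis code), the sketch below simulates inverse Gaussian process degradation paths with mean function Lambda(t) = a*t and shape parameter eta, and reads off lifetimes as the first passage of a failure threshold; all parameter values are invented.

```python
import numpy as np
from scipy.stats import invgauss

rng = np.random.default_rng(1)
a, eta, thresh = 1.0, 4.0, 10.0            # drift, shape, failure threshold
t = np.linspace(0.0, 20.0, 201)
dL = np.diff(a * t)                        # increments of Lambda(t) = a*t

def degradation_path():
    # increment ~ IG(mean=dL, shape=eta*dL^2); scipy: invgauss(m/lam, scale=lam)
    lam = eta * dL**2
    inc = invgauss.rvs(dL / lam, scale=lam, random_state=rng)
    return np.concatenate(([0.0], np.cumsum(inc)))

lifetimes = []
for _ in range(2000):
    y = degradation_path()
    over = np.nonzero(y >= thresh)[0]      # first index over the threshold
    lifetimes.append(t[over[0]] if over.size else np.inf)
lifetimes = np.array(lifetimes)
# median lifetime and fraction of units failed within the observation window
print(np.median(lifetimes[np.isfinite(lifetimes)]), np.isfinite(lifetimes).mean())
```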
345

Uma regra para a polarização de funções de base geradas pelo método da coordenada geradora / A rule for polarization of Gaussian basis functions obtained with the generator coordinate method

Milena Palhares Maringolo 22 October 2010 (has links)
The polynomial generator coordinate method (pGCM), developed by R.C. Barbosa and A.B.F. da Silva [1], is a remarkable mathematical tool for generating basis functions (also known as basis sets). The basis sets generated by this method are well behaved and able to produce accurate values for electronic molecular properties. However, after generating a basis set [2], one needs to add a set of exponent functions to better adapt the basis set to molecular calculations; these additional functions are called polarization functions. Adding polarization functions through computational optimization is very costly, so this work provides a methodology in which the polarization functions are obtained from the initial (primitive) basis set without optimizing them separately with costly optimization algorithms. This procedure drastically reduces the computational time needed to find polarization functions for molecular quantum chemical calculations: our methodology permits choosing the polarization functions p, d, f, g, etc. directly from the primitive orbital exponents of each atomic symmetry in a very simple manner. The determination of polarization functions with our methodology was carried out with several quantum chemical methods.
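The abstract does not reproduce the rule itself, so the snippet below only illustrates the general idea of taking polarization exponents from the primitive set instead of optimizing them; the specific recipe (geometric means of the most diffuse adjacent exponents) and the exponent values are hypothetical, not the rule of this dissertation.

```python
import numpy as np

# invented s-type primitive exponents for some atom (illustrative only)
s_exponents = np.array([1264.6, 189.9, 43.2, 12.1, 3.9, 1.38, 0.49, 0.17])

def polarization_from_primitives(exps, n_pol=2):
    # hypothetical rule: geometric means of the most diffuse adjacent
    # primitive exponents serve as candidate polarization exponents
    diffuse = np.sort(exps)[:n_pol + 1]
    return np.sqrt(diffuse[:-1] * diffuse[1:])

print(polarization_from_primitives(s_exponents))  # candidate p/d exponents
```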
346

Dynamics of Driven Quantum Systems: A Search for Parallel Algorithms

Baghery, Mehrdad 15 January 2018 (has links) (PDF)
This thesis explores the possibility of using parallel algorithms to calculate the dynamics of driven quantum systems prevalent in atomic physics. In this process, new as well as existing algorithms are considered. The thesis is split into three parts. In the first part an attempt is made to develop a new formalism of the time-dependent Schroedinger equation (TDSE) in the hope that the new formalism could lead to a parallel algorithm. The TDSE is written as an eigenvalue problem, the ground state of which represents the solution to the original TDSE. Even though the formalism is mathematically sound and correct, it turns out that the ground state of this eigenvalue problem cannot be easily found numerically, rendering the original hope a false one. In the second part we borrow a Bayesian global optimisation method from the machine learning community in an effort to find the optimum conditions in different systems more quickly than textbook optimisation algorithms. This algorithm is specifically designed to find the optimum of expensive functions, and is used in this thesis to 1. maximise the electron yield of hydrogen, 2. maximise the asymmetry in the photo-electron angular distribution of hydrogen, 3. maximise the higher harmonic generation yield within a certain frequency range, and 4. generate short pulses by combining higher harmonics generated by hydrogen. In the last part, the phenomenon of dynamic interference (the temporal equivalent of the double-slit experiment) is discussed. The necessary conditions are derived from first principles and it is shown where some of the previous analytical and numerical studies have gone wrong; it turns out the choice of gauge plays a crucial role. Furthermore, a number of different scenarios are presented where interference in the photo-electron spectrum is expected to occur.
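For context, a standard (serial) Crank-Nicolson step for a driven 1-D TDSE is sketched below; this is the kind of propagation whose parallelisation the thesis investigates, not code from the thesis. The grid, potential, and driving field are placeholders.

```python
import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import spsolve

n, dx, dt = 400, 0.1, 0.01
x = (np.arange(n) - n / 2) * dx
V = 0.5 * x**2                                   # placeholder potential (a.u.)
lap = diags([1, -2, 1], [-1, 0, 1], shape=(n, n)) / dx**2

def step(psi, t):
    # i d/dt psi = (-(1/2) d^2/dx^2 + V(x) + x*E(t)) psi, length-gauge coupling
    E = 0.05 * np.sin(0.8 * t)                   # made-up driving field
    H = -0.5 * lap + diags(V + x * E)
    A = (identity(n) + 0.5j * dt * H).tocsc()    # Crank-Nicolson half steps
    B = identity(n) - 0.5j * dt * H
    return spsolve(A, B @ psi)                   # (rebuilt each step for clarity)

psi = np.exp(-x**2 / 2).astype(complex)
psi /= np.linalg.norm(psi)
for i in range(100):
    psi = step(psi, i * dt)
print(np.linalg.norm(psi))                       # CN is norm-preserving (~1.0)
```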
347

On Maximizing The Performance Of The Bilateral Filter For Image Denoising

Kishan, Harini 03 1900 (has links) (PDF)
We address the problem of image denoising for additive white Gaussian noise (AWGN), Poisson noise, and Chi-squared noise scenarios. Thermal noise in electronic circuitry in camera hardware can be modeled as AWGN. Poisson noise is used to model the randomness associated with photon counting during image acquisition. Chi-squared noise statistics are appropriate in imaging modalities such as Magnetic Resonance Imaging (MRI). AWGN is additive, while Poisson noise is neither additive nor multiplicative. Although Chi-squared noise is derived from AWGN statistics, it is non-additive. Mean-square error (MSE) is the most widely used metric to quantify denoising performance. In parametric denoising approaches, the optimal parameters of the denoising function are chosen by employing a minimum mean-square-error (MMSE) criterion. However, the dependence of MSE on the noise-free signal makes MSE computation infeasible in practical scenarios. We circumvent the problem by adopting an MSE estimation approach. The ground-truth-independent estimates of MSE are Stein's unbiased risk estimate (SURE), the Poisson unbiased risk estimate (PURE) and the Chi-squared unbiased risk estimate (CURE) for the AWGN, Poisson and Chi-squared noise models, respectively. The denoising function is optimized to achieve maximum noise suppression by minimizing the MSE estimates. We have chosen the bilateral filter as the denoising function. The bilateral filter is a nonlinear edge-preserving smoother whose performance is governed by the choice of its parameters, which can be optimized to minimize the MSE or its estimate. However, in practical scenarios, MSE cannot be computed due to the inaccessibility of the noise-free image. We derive SURE, PURE, and CURE in the context of bilateral filtering and compute the parameters of the bilateral filter that yield the minimum cost (SURE/PURE/CURE). On processing the noisy input with the bilateral filter whose optimal parameters are chosen by minimizing the MSE estimates (SURE/PURE/CURE), we obtain the estimate closest to the ground truth. We denote the bilateral filter with optimal parameters as the SURE-optimal bilateral filter (SOBF), PURE-optimal bilateral filter (POBF) and CURE-optimal bilateral filter (COBF) for the AWGN, Poisson and Chi-squared noise scenarios, respectively. In addition to the globally optimal bilateral filters (SOBF and POBF), we propose spatially adaptive bilateral filter variants, namely, the SURE-optimal patch-based bilateral filter (SPBF) and the PURE-optimal patch-based bilateral filter (PPBF). SPBF and PPBF yield significant improvements in performance and preserve edges better when compared with their globally optimal counterparts, SOBF and POBF, respectively. We also propose the SURE-optimal multiresolution bilateral filter (SMBF), where we couple SOBF with wavelet thresholding, and, for Poisson noise suppression, the PURE-optimal multiresolution bilateral filter (PMBF), its Poisson counterpart. We compare the performance of SMBF and PMBF with state-of-the-art denoising algorithms for AWGN and Poisson noise, respectively. The proposed multiresolution-based bilateral filtering techniques yield denoising performance that is competitive with that of the state-of-the-art techniques.
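The closed-form SURE/PURE/CURE expressions derived in the thesis are not reproduced in the abstract, so the sketch below instead pairs a naive brute-force bilateral filter with the generic Monte-Carlo divergence estimate of SURE (in the spirit of Ramani et al.) to rank a range parameter under AWGN; the image, noise level, and parameter grid are made up.

```python
import numpy as np

def bilateral(img, sig_s=2.0, sig_r=0.3, rad=3):
    # naive O(N * window) bilateral filter: spatial * range Gaussian weights
    H, W = img.shape
    out = np.empty_like(img)
    yy, xx = np.mgrid[-rad:rad + 1, -rad:rad + 1]
    gs = np.exp(-(xx**2 + yy**2) / (2 * sig_s**2))      # spatial kernel
    pad = np.pad(img, rad, mode='reflect')
    for i in range(H):
        for j in range(W):
            patch = pad[i:i + 2 * rad + 1, j:j + 2 * rad + 1]
            w = gs * np.exp(-(patch - img[i, j])**2 / (2 * sig_r**2))
            out[i, j] = (w * patch).sum() / w.sum()
    return out

def mc_sure(y, denoise, sigma, eps=1e-3, rng=np.random.default_rng(0)):
    # SURE = ||f(y)-y||^2/n - sigma^2 + 2 sigma^2 div(f)/n, div by Monte Carlo
    b = rng.standard_normal(y.shape)
    fy = denoise(y)
    div = (b * (denoise(y + eps * b) - fy)).sum() / eps
    n = y.size
    return ((fy - y)**2).sum() / n - sigma**2 + 2 * sigma**2 * div / n

sigma = 0.1
clean = np.tile(np.linspace(0, 1, 32), (32, 1))          # toy ramp image
y = clean + sigma * np.random.default_rng(1).standard_normal(clean.shape)
for sig_r in (0.1, 0.2, 0.4):                            # pick sig_r by SURE
    print(sig_r, mc_sure(y, lambda z: bilateral(z, sig_r=sig_r), sigma))
```

The parameter with the lowest SURE value is selected, without ever touching the clean image, which is the key property the abstract exploits.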
348

Constellation Constrained Capacity For Two-User Broadcast Channels

Deshpande, Naveen 01 1900 (has links) (PDF)
A broadcast channel is a communication path between a single source and two or more receivers or users. The source intends to communicate independent information to the users. A particular case of interest is the Gaussian Broadcast Channel (GBC), where the noise at each user is additive white Gaussian noise (AWGN). The capacity region of the GBC is well known, and the capacity-achieving input to the channel is Gaussian distributed. The capacity region of another special case of the GBC, namely the Fading Broadcast Channel (FBC), was given in [Li and Goldsmith, 2001], where it was shown that superposition of Gaussian codes is optimal for the FBC (treated as a vector degraded broadcast channel). The capacity region obtained when the input to the channel is distributed uniformly over a finite alphabet (constellation) is termed the constellation-constrained (CC) capacity region [Biglieri 2005]. In this thesis the CC capacity regions for the two-user GBC and FBC are obtained. For the GBC, the idea of superposition coding with input from a finite alphabet and CC capacity was explored in [Hupert and Bossert, 2007], but with some limitations. When the participating individual signal sets are nearly equal, i.e., given a total average power constraint P, the rate reward α (also the power-sharing parameter) is approximately equal to 0.5, we show via simulation that rotating one of the signal sets by an appropriate angle maximally enlarges the CC capacity region. We analytically derive the expression for the optimal angle of rotation. For the FBC, a heuristic power allocation procedure called the finite-constellation power allocation procedure is provided, through which it is shown (via simulation) that the ergodic CC capacity region thus obtained completely subsumes the ergodic CC capacity region obtained by allocating power using the procedure given in [Li and Goldsmith, 2001]. It is shown through simulations that rotating one of the signal sets by an optimal angle (obtained by trial and error) for a given α maximally enlarges the ergodic CC capacity region when finite-constellation power allocation is used. An expression for determining the optimal angle of rotation for a given fading state is obtained, and the effect of rotation is maximum around the region corresponding to α = 0.5. For both the GBC and the FBC, superposition coding is done at the transmitter and successive decoding is carried out at the receivers.
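As a single-user building block for the CC regions discussed above (not the two-user computation of the thesis), the sketch below estimates the constellation-constrained capacity of an AWGN channel with equiprobable inputs by Monte Carlo; QPSK and the SNR value are illustrative choices.

```python
import numpy as np

def cc_capacity(const, snr_db, n_mc=20000, rng=np.random.default_rng(0)):
    # I(X;Y) = log2(M) - (1/M) sum_i E[ log2 sum_j exp(-(|y-x_j|^2-|y-x_i|^2)/N0) ]
    const = const / np.sqrt(np.mean(np.abs(const)**2))   # unit average power
    N0 = 10 ** (-snr_db / 10)                            # noise variance
    noise = np.sqrt(N0 / 2) * (rng.standard_normal(n_mc) +
                               1j * rng.standard_normal(n_mc))
    M = len(const)
    h = 0.0
    for xi in const:                                     # average over inputs
        y = xi + noise
        d2 = np.abs(y[:, None] - const[None, :])**2      # |y - x_j|^2
        ll = np.exp((-d2 + np.abs(noise[:, None])**2) / N0)
        h += np.mean(np.log2(ll.sum(axis=1))) / M
    return np.log2(M) - h

qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j])
print(cc_capacity(qpsk, snr_db=5.0))                     # bits per channel use
```

At high SNR the estimate saturates at log2(M) bits, the finite-alphabet ceiling that distinguishes CC capacity from the Gaussian-input capacity.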
349

A Review of Gaussian Random Matrices

Andersson, Kasper January 2020 (has links)
While many university students get introduced to the concept of statistics early in their education, random matrix theory (RMT) usually first arises (if at all) in graduate-level classes. This thesis serves as a friendly introduction to RMT, which is the study of matrices with entries following some probability distribution. Fundamental results, such as the Gaussian and Wishart ensembles, are introduced, and a discussion of how their corresponding eigenvalues are distributed is presented. Two well-studied applications, namely neural networks and PCA, are discussed, where we present how RMT can be applied.
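As a taste of the material, the sketch below samples a GOE matrix (one of the Gaussian ensembles mentioned) and checks its eigenvalue histogram against Wigner's semicircle density; the matrix size and bin count are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
A = rng.standard_normal((n, n))
H = (A + A.T) / np.sqrt(2 * n)              # GOE, scaled so the spectrum -> [-2, 2]
eig = np.linalg.eigvalsh(H)

# empirical density vs the semicircle density (2*pi)^-1 * sqrt(4 - x^2)
hist, edges = np.histogram(eig, bins=20, range=(-2, 2), density=True)
mid = 0.5 * (edges[:-1] + edges[1:])
semicircle = np.sqrt(np.clip(4 - mid**2, 0, None)) / (2 * np.pi)
print(np.abs(hist - semicircle).max())      # small deviation for large n
```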
350

Dynamics of Driven Quantum Systems: A Search for Parallel Algorithms

Baghery, Mehrdad 24 November 2017 (has links)
This thesis explores the possibility of using parallel algorithms to calculate the dynamics of driven quantum systems prevalent in atomic physics. In this process, new as well as existing algorithms are considered. The thesis is split into three parts. In the first part an attempt is made to develop a new formalism of the time-dependent Schroedinger equation (TDSE) in the hope that the new formalism could lead to a parallel algorithm. The TDSE is written as an eigenvalue problem, the ground state of which represents the solution to the original TDSE. Even though the formalism is mathematically sound and correct, it turns out that the ground state of this eigenvalue problem cannot be easily found numerically, rendering the original hope a false one. In the second part we borrow a Bayesian global optimisation method from the machine learning community in an effort to find the optimum conditions in different systems more quickly than textbook optimisation algorithms. This algorithm is specifically designed to find the optimum of expensive functions, and is used in this thesis to 1. maximise the electron yield of hydrogen, 2. maximise the asymmetry in the photo-electron angular distribution of hydrogen, 3. maximise the higher harmonic generation yield within a certain frequency range, and 4. generate short pulses by combining higher harmonics generated by hydrogen. In the last part, the phenomenon of dynamic interference (the temporal equivalent of the double-slit experiment) is discussed. The necessary conditions are derived from first principles and it is shown where some of the previous analytical and numerical studies have gone wrong; it turns out the choice of gauge plays a crucial role. Furthermore, a number of different scenarios are presented where interference in the photo-electron spectrum is expected to occur.
