21

Essays on semi-parametric Bayesian econometric methods

Wu, Ruochen January 2019 (has links)
This dissertation consists of three chapters on semi-parametric Bayesian econometric methods. Chapter 1 applies a semi-parametric method to demand systems and compares how well different linear estimation approaches to the widely used Almost Ideal demand model, based on either iteration or approximation, recover the true elasticities. Chapter 2, co-authored with Dr. Melvyn Weeks, introduces a new semi-parametric Bayesian Generalized Least Squares estimator, which employs a Dirichlet Process prior to cope with potential heterogeneity in the error distributions. Two methods are discussed as special cases of the GLS estimator: the Seemingly Unrelated Regression for equation systems and the Random Effects Model for panel data, which can be applied to many fields, such as the demand analysis in Chapter 1. Chapter 3 focuses on subset selection for the efficiencies of firms; it addresses the influence of heterogeneity in the distributions of efficiencies on subset selection by applying the semi-parametric Bayesian Random Effects Model introduced in Chapter 2.
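The Dirichlet Process prior at the core of Chapter 2 is usually summarized by its stick-breaking construction. Below is a minimal sketch of a truncated draw from a DP; the standard-normal base measure, the concentration parameter alpha = 2, and the truncation level are illustrative assumptions, not values from the dissertation.

```python
# Truncated stick-breaking draw from a Dirichlet Process prior (sketch).
import numpy as np

def dp_stick_breaking(alpha, g0_sampler, n_atoms, rng):
    """Draw a truncated sample G ~ DP(alpha, G0) as (weights, atoms)."""
    betas = rng.beta(1.0, alpha, size=n_atoms)            # stick-breaking fractions
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - betas[:-1])))
    weights = betas * remaining                           # pi_k = beta_k * prod_{j<k}(1 - beta_j)
    atoms = g0_sampler(n_atoms)                           # atom locations drawn from G0
    return weights, atoms

rng = np.random.default_rng(0)
w, a = dp_stick_breaking(alpha=2.0, g0_sampler=lambda m: rng.normal(size=m),
                         n_atoms=50, rng=rng)
print(w.sum())  # close to 1 for a reasonable truncation level
```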
22

Kriging-based Approaches for the Probabilistic Analysis of Strip Footings Resting on Spatially Varying Soils

Thajeel, Jawad 08 December 2017 (has links)
The probabilistic analysis of geotechnical structures involving spatially varying soil properties is generally performed using the Monte Carlo Simulation methodology. This method is not suitable for computing the small failure probabilities encountered in practice because it becomes very time-expensive in such cases, owing to the large number of simulations required to calculate accurate values of the failure probability. Three probabilistic approaches (named AK-MCS, AK-IS and AK-SS) based on active learning and combining Kriging with one of three simulation techniques (Monte Carlo Simulation MCS, Importance Sampling IS or Subset Simulation SS) were developed. Within AK-MCS, a Monte Carlo simulation is performed without evaluating the whole population: the population is predicted using a kriging meta-model defined with only a few of its points, significantly reducing the computation time with respect to crude MCS. In AK-IS, the more efficient sampling technique IS is used instead of MCS. In this approach, the small failure probability is estimated with accuracy similar to AK-MCS but with a much smaller initial population, further reducing the computation time. Finally, in AK-SS, the more efficient sampling technique SS is proposed. This technique avoids the search for design points and can therefore deal with limit state surfaces of arbitrary shape. All three methods were applied to the case of a vertically loaded strip footing resting on a spatially varying soil. The obtained results are presented and discussed.
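A minimal sketch of the AK-MCS idea described above: a kriging (Gaussian process) meta-model, fitted to a small design of experiments, predicts the performance function over the whole Monte Carlo population, and the design grows one point at a time where the sign of the prediction is most uncertain. The toy limit state, the U learning function |mu|/sigma, and the stopping threshold are common active-learning kriging choices used here as assumptions, not the thesis's strip-footing model.

```python
# Active-learning kriging Monte Carlo simulation (AK-MCS style), sketched.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def g(x):  # toy limit state: failure when g <= 0
    return x[:, 0] ** 2 + 2.0 - x[:, 1]

rng = np.random.default_rng(1)
population = rng.normal(size=(20000, 2))            # full MC population, never all evaluated
idx = rng.choice(len(population), size=12, replace=False)
X, y = population[idx], g(population[idx])          # small initial design of experiments

for _ in range(30):                                 # enrichment iterations
    gp = GaussianProcessRegressor(kernel=RBF(), normalize_y=True).fit(X, y)
    mu, sd = gp.predict(population, return_std=True)
    U = np.abs(mu) / np.maximum(sd, 1e-12)          # low U = sign of g is uncertain there
    if U.min() >= 2.0:                              # common stopping criterion
        break
    best = int(np.argmin(U))
    X = np.vstack([X, population[best]])            # evaluate only the most useful point
    y = np.append(y, g(population[best:best + 1]))

pf = float(np.mean(mu <= 0.0))                      # failure probability from predictions
print(f"estimated Pf ~ {pf:.4f}")
```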
23

"Abordagem genética para seleção de um conjunto reduzido de características para construção de ensembles de redes neurais: aplicação à língua eletrônica" / A genetic approach to feature subset selection for construction of neural network ensembles: an application to gustative sensors

Ednaldo José Ferreira 10 August 2005 (has links)
Irrelevant features, present in databases from many domains, degrade the prediction accuracy of classifiers induced by machine learning algorithms. Databases generated by an electronic tongue are typical examples in which the large number of irrelevant and redundant features harms the accuracy of the induced classifiers. There are basically two approaches to deal with this problem: feature subset selection and ensembles of classifiers. A good ensemble is composed of accurate and diverse classifiers, and an effective way to construct one is through feature selection. Ensemble feature selection has an additional objective: to find feature subsets that promote accuracy and prediction diversity among the ensemble's classifiers. Genetic algorithms are promising techniques for ensemble feature selection. However, genetic search, like other search strategies, usually aims only at ensemble construction, allowing all features (relevant, irrelevant and redundant) to be selected. This work proposes an approach based on a genetic algorithm to construct ensembles of neural networks using a reduced subset of the available features. Two approaches were used to train the neural networks and improve ensemble accuracy: the first is based on early stopping of the back-propagation algorithm, and the second on multi-objective optimization. The results show the effectiveness and accuracy of the proposed algorithm for constructing ensembles of neural networks; its efficiency in reducing the feature set was also evidenced, proving its capacity to construct an ensemble from a reduced feature subset.
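A minimal sketch of the kind of genetic search the abstract proposes: individuals are feature bitmasks, fitness is the held-out accuracy of a small back-propagation network trained with early stopping (mirroring the first training approach above), and the fittest masks would form the ensemble members. The synthetic data, population size, and genetic operators are illustrative assumptions, not the electronic-tongue setup.

```python
# Genetic feature-subset selection for neural network ensembles (sketch).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=400, n_features=30, n_informative=6,
                           n_redundant=10, random_state=0)
Xtr, Xval, ytr, yval = train_test_split(X, y, random_state=0)

def fitness(mask):
    """Validation accuracy of a small early-stopped net on the masked features."""
    if mask.sum() == 0:
        return 0.0
    net = MLPClassifier(hidden_layer_sizes=(16,), early_stopping=True,
                        max_iter=300, random_state=0)
    net.fit(Xtr[:, mask], ytr)
    return net.score(Xval[:, mask], yval)

pop = rng.random((12, X.shape[1])) < 0.3            # initial random bitmasks
for gen in range(5):
    scored = sorted(((fitness(m), tuple(m)) for m in pop), reverse=True)
    elite = [np.array(m) for _, m in scored[:4]]    # keep the fittest masks
    children = []
    while len(children) < len(pop) - len(elite):
        a, b = rng.choice(len(elite), size=2, replace=False)
        cut = rng.integers(1, X.shape[1])           # one-point crossover
        child = np.concatenate([elite[a][:cut], elite[b][cut:]])
        child ^= rng.random(X.shape[1]) < 0.05      # bit-flip mutation
        children.append(child)
    pop = np.array(elite + children)

# The surviving masks define accurate, hopefully diverse, ensemble members.
print("member accuracies:", [round(fitness(m), 3) for m in pop[:5]])
```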
24

Selecting the Best Linear Model From a Subset of All Possible Models for a Given Set of Predictors in a Multiple Linear Regression Analysis

Jensen, David L. 01 May 1972 (has links)
Sixteen "model building" and "model selection" procedures commonly encountered in industry, all of which were initially alleged to be capable of identifying the best model from the collection of 2k possible linear models corresponding to a given set of k predictors in a multiple linear regression analysis, were individually summarized and subsequently evaluated by considering their comparative advantages and limitations from both a theoretical and a practical standpoint. It was found that none of the proposed procedures were absolutely infallible and that several were actually unsuitable. However, it was also found that most of these techniques could still be profitably employed by the analyst, and specific directional guidelines were recommended for their implementation in a proper analysis. Furthermore, the specific role of the analyst in a multiple linear regression application was clearly defined in a practical sense.
25

A Java Framework for Broadcast Encryption Algorithms

Hesselius, Tobias, Savela, Tommy January 2004 (has links)
Broadcast encryption is a fairly new area in cryptology. It was first addressed in 1992, and research in this area has been extensive ever since. In short, broadcast encryption is used for efficient and secure broadcasting to an authorized group of users. This group can change dynamically, and in some cases only one-way communication between the sender and receivers is available. An example of this is digital TV transmission via satellite, in which only paying customers can decrypt and view the broadcast.

The purpose of this thesis is to develop a general Java framework for the implementation and performance analysis of broadcast encryption algorithms. In addition to the framework itself, a few of the most common broadcast encryption algorithms (Complete Subtree, Subset Difference, and the Logical Key Hierarchy scheme) have been implemented in the system.

This master's thesis project was defined by and carried out at the Information Theory division of the Department of Electrical Engineering (ISY), Linköping Institute of Technology, during the first half of 2004.
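A minimal sketch (in Python rather than the thesis's Java, for brevity) of the cover computation behind the Complete Subtree method mentioned above: users sit at the leaves of a full binary tree, and the broadcast is encrypted under the keys of the maximal subtrees containing no revoked user. The heap-style node numbering (root = 1) is an illustrative convention, not the thesis's API.

```python
# Complete Subtree cover: maximal revoked-free subtrees of a full binary tree.
def complete_subtree_cover(depth, revoked, node=1, level=0):
    """Return roots of the maximal subtrees under `node` with no revoked leaf."""
    leaves_below = range(node << (depth - level), (node + 1) << (depth - level))
    if not any(r in leaves_below for r in revoked):
        return [node]                 # whole subtree is clean: one cover element
    if level == depth:
        return []                     # a revoked leaf itself: nothing to cover
    left = complete_subtree_cover(depth, revoked, 2 * node, level + 1)
    right = complete_subtree_cover(depth, revoked, 2 * node + 1, level + 1)
    return left + right

# 8 users at leaves 8..15 of a depth-3 tree; revoke users 9 and 12.
print(complete_subtree_cover(3, {9, 12}))   # -> [8, 5, 13, 7]
```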
26

Basis Reduction Algorithms and Subset Sum Problems

LaMacchia, Brian A. 01 June 1991 (has links)
This thesis investigates a new approach to lattice basis reduction suggested by M. Seysen. Seysen's algorithm attempts to globally reduce a lattice basis, whereas the Lenstra, Lenstra, Lovasz (LLL) family of reduction algorithms concentrates on local reductions. We show that Seysen's algorithm is well suited for reducing certain classes of lattice bases, and often requires much less time in practice than the LLL algorithm. We also demonstrate how Seysen's algorithm for basis reduction may be applied to subset sum problems. Seysen's technique, used in combination with the LLL algorithm, and other heuristics, enables us to solve a much larger class of subset sum problems than was previously possible.
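The standard route from subset sum to basis reduction, which the thesis pursues with Seysen's algorithm, is to encode the instance as a lattice whose short vectors reveal the solution. The sketch below builds a Lagarias-Odlyzko-style basis and checks that the solution vector lies in the lattice; the reduction step itself (LLL or Seysen's method) is omitted, and the small instance is an illustrative assumption.

```python
# Lattice encoding of a subset sum instance (sketch; reduction step omitted).
import numpy as np

weights = np.array([366, 385, 392, 401, 422])     # illustrative instance
solution = np.array([1, 0, 1, 0, 1])              # x with weights @ x = target
target = int(weights @ solution)                  # 1180

n = len(weights)
N = n                                             # scaling factor for the sum column
B = np.zeros((n + 1, n + 1), dtype=int)
B[:n, :n] = np.eye(n, dtype=int)                  # row i: unit vector e_i ...
B[:n, n] = N * weights                            # ... with N * a_i appended
B[n, n] = -N * target                             # last row cancels the target

coeffs = np.append(solution, 1)                   # combine solution rows + target row
short = coeffs @ B                                # = (x_1, ..., x_n, 0): short when x is 0/1
print(short)                                      # [1 0 1 0 1 0]
```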
27

A New Generation of Mixture-Model Cluster Analysis with Information Complexity and the Genetic EM Algorithm

Howe, John Andrew 01 May 2009 (has links)
In this dissertation, we extend several relatively new developments in statistical model selection and data mining in order to improve one of the workhorse statistical tools: mixture modeling (Pearson, 1894). The traditional mixture model assumes data comes from several populations of Gaussian distributions. Thus, what remains is to determine how many distributions there are, their population parameters, and the mixing proportions. However, real data often do not fit the restrictions of normality very well. It is likely that data from a single population exhibiting either asymmetrical or nonnormal tail behavior could be erroneously modeled as two populations, resulting in suboptimal decisions. To avoid these pitfalls, we develop the mixture model under a broader distributional assumption by fitting a group of multivariate elliptically-contoured distributions (Anderson and Fang, 1990; Fang et al., 1990). Special cases include the multivariate Gaussian and power exponential distributions, as well as the multivariate generalization of the Student’s T. This gives us the flexibility to model nonnormal tail and peak behavior, though the symmetry restriction still exists. The literature has many examples of research generalizing the Gaussian mixture model to other distributions (Farrell and Mersereau, 2004; Hasselblad, 1966; John, 1970a), but our effort is more general.

Further, we generalize the mixture model to be non-parametric, by developing two types of kernel mixture model. First, we generalize the mixture model to use the truly multivariate kernel density estimators (Wand and Jones, 1995). Additionally, we develop the power exponential product kernel mixture model, which allows the density to adjust to the shape of each dimension independently. Because kernel density estimators enforce no functional form, both of these methods can adapt to nonnormal asymmetric, kurtotic, and tail characteristics.

Over the past two decades or so, evolutionary algorithms have grown in popularity, as they have provided encouraging results in a variety of optimization problems. Several authors have applied the genetic algorithm - a subset of evolutionary algorithms - to mixture modeling, including Bhuyan et al. (1991), Krishna and Murty (1999), and Wicker (2006). These procedures have the benefit that they bypass computational issues that plague the traditional methods. We extend these initialization and optimization methods by combining them with our updated mixture models. Additionally, we “borrow” results from robust estimation theory (Ledoit and Wolf, 2003; Shurygin, 1983; Thomaz, 2004) in order to data-adaptively regularize population covariance matrices. Numerical instability of the covariance matrix can be a significant problem for mixture modeling, since estimation is typically done on a relatively small subset of the observations.

We likewise extend various information criteria (Akaike, 1973; Bozdogan, 1994b; Schwarz, 1978) to the elliptically-contoured and kernel mixture models. Information criteria guide model selection and estimation based on various approximations to the Kullback-Leibler divergence. Following Bozdogan (1994a), we use these tools to sequentially select the best mixture model, select the best subset of variables, and detect influential observations - all without making any subjective decisions. Over the course of this research, we developed a full-featured Matlab toolbox (M3) which implements all the new developments in mixture modeling presented in this dissertation. We show results on both simulated and real-world datasets.

Keywords: mixture modeling, nonparametric estimation, subset selection, influence detection, evidence-based medical diagnostics, unsupervised classification, robust estimation.
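A minimal sketch of information-criterion-guided model selection in the spirit described above: fit mixtures with an increasing number of components and keep the one minimizing the criterion. sklearn's Gaussian mixture and BIC stand in, as assumptions, for the dissertation's elliptically-contoured and kernel mixtures and its richer information-complexity criteria.

```python
# Mixture-model selection by minimizing an information criterion (sketch).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
data = np.vstack([rng.normal(-2.0, 0.7, size=(150, 2)),
                  rng.normal(+2.5, 1.1, size=(250, 2))])   # two true populations

fits = [GaussianMixture(n_components=m, random_state=0).fit(data)
        for m in range(1, 6)]                               # candidate models
bics = [model.bic(data) for model in fits]                  # criterion per model
best = fits[int(np.argmin(bics))]
print("chosen number of components:", best.n_components)    # expect 2
```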
28

Sequential Design of Experiments to Estimate a Probability of Failure.

Li, Ling 16 May 2012 (has links) (PDF)
This thesis deals with the problem of estimating the probability of failure of a system from computer simulations. When only an expensive-to-simulate model of the system is available, the simulation budget is usually severely limited, which is incompatible with the use of classical Monte Carlo methods. In fact, estimating a small probability of failure from very few simulations, as required in some complex industrial problems, is a particularly difficult task. A classical approach consists in replacing the expensive-to-simulate model with a surrogate model that requires little computing resource. Using such a surrogate model, two operations can be achieved. The first consists in choosing as small a number of simulations as possible to learn the regions of the system's parameter space that lead to failure. The second is constructing good estimators of the probability of failure. The contributions of this thesis are twofold. First, we derive SUR (stepwise uncertainty reduction) strategies from a Bayesian decision-theoretic formulation of the problem of estimating a probability of failure. Second, we propose a new algorithm, called Bayesian Subset Simulation, that takes the best from the Subset Simulation algorithm and from sequential Bayesian methods based on Gaussian process modeling. The new strategies are supported by numerical results from several benchmark examples in reliability analysis, and the proposed methods show good performance compared to methods from the literature.
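A minimal sketch of the plain Subset Simulation that Bayesian Subset Simulation builds on: the rare event {g <= 0} is reached through intermediate thresholds chosen as empirical quantiles, and conditional samples are generated with a simple random-walk Metropolis move. The toy limit state and the level probability p0 = 0.1 are illustrative assumptions.

```python
# Plain Subset Simulation for a small failure probability (sketch).
import numpy as np

rng = np.random.default_rng(2)

def g(x):                                   # toy limit state, failure when g <= 0
    return 5.0 - x.sum(axis=1) / np.sqrt(x.shape[1])   # so Pf = Phi(-5) ~ 2.9e-7

n, p0, dim = 2000, 0.1, 2
x = rng.normal(size=(n, dim))
gx = g(x)
pf, level = 1.0, np.quantile(gx, p0)        # first intermediate threshold

while level > 0.0:
    pf *= p0                                # conditional probability of this level
    seeds = x[gx <= level]                  # samples already below the threshold
    x, gx = seeds.copy(), g(seeds)
    while len(x) < n:                       # regrow the population from the seeds
        m = min(len(x), n - len(x))
        cand = x[:m] + 0.8 * rng.normal(size=(m, dim))
        log_ratio = -0.5 * (cand ** 2 - x[:m] ** 2).sum(axis=1)
        accept = rng.random(m) < np.exp(np.minimum(0.0, log_ratio))
        new = np.where(accept[:, None], cand, x[:m])
        gn = g(new)
        bad = gn > level                    # reject moves leaving the level set
        new[bad], gn[bad] = x[:m][bad], gx[:m][bad]
        x, gx = np.vstack([x, new]), np.append(gx, gn)
    level = max(np.quantile(gx, p0), 0.0)   # next threshold (clipped at failure)

pf *= np.mean(gx <= 0.0)                    # last conditional level
print(f"estimated Pf ~ {pf:.2e}")
```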
29

A Survey on Known Algorithms in Solving the Generalized Birthday Problem (k-list)

Namaziesfanjani, Mina 01 February 2013 (has links) (PDF)
The well-known birthday paradox is one of the most important problems in cryptographic applications. Incremental hash functions, digital signatures in public key cryptography, and low-weight parity check equations of LFSRs in stream ciphers are examples of applications whose attacks benefit from birthday problem theory. Wagner introduced and formulated the k-dimensional birthday problem and proposed an algorithm that solves it in O(k · m^(1/log k)). Generalized birthday solutions are used in some applications to break knapsack-based systems or to find collisions in hash functions. The optimized birthday algorithms can solve knapsack problems of dimension n, which is believed to be NP-hard; the equivalent Subset Sum Problem seeks a solution over Z/mZ. The main property for classifying the problem is its density: when the density is small enough, the problem reduces to the shortest lattice vector problem and has a polynomial-time solution. Assigning a variable to each element of the lists, decoding them into a matrix, and treating each row of the matrix as an equation yields a multivariate polynomial system of equations, and any solution of this system is a solution of the k-list problem; such systems can be attacked with methods like F4, F5, and the strategy called eXtended Linearization (XL), among others. We discuss the new approaches and methods proposed to reduce the complexity of these algorithms. For the particular case of over-determined systems (more equations than variables) with a single solution, Wolf and Thomae worked to gradually decrease the complexity of F5; moreover, their group tries to solve the problem using monomials of special degrees and linear equations for small lists. We observe and compare all the suggested methods in this survey.
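A minimal sketch of Wagner's algorithm for the k = 4 case: merge the first two and last two lists on their low-order bits, then join the surviving pair-XORs on the remaining bits; any match gives a quadruple XORing to zero. The 32-bit value width, list sizes, and the 12-bit merge window are illustrative choices.

```python
# Wagner's 4-list generalized birthday algorithm (sketch).
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(3)
nbits, low, size = 32, 12, 1 << 13
lists = [rng.integers(0, 1 << nbits, size=size, dtype=np.uint64) for _ in range(4)]

def merge(A, B, mask):
    """Pairs (a, b) with (a ^ b) & mask == 0, kept as XOR plus provenance."""
    index = defaultdict(list)
    for b in B:
        index[int(b) & mask].append(int(b))
    return [(int(a) ^ b, int(a), b) for a in A for b in index[int(a) & mask]]

mask = (1 << low) - 1
L12 = merge(lists[0], lists[1], mask)        # pair XORs vanish on the low 12 bits
L34 = merge(lists[2], lists[3], mask)

high = defaultdict(list)                     # final join on all remaining bits
for v, c, d in L34:
    high[v].append((c, d))
solutions = [(a, b, c, d) for v, a, b in L12 for c, d in high[v]]
print(len(solutions), "quadruples with x1 ^ x2 ^ x3 ^ x4 = 0")
```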
30

Bayesian Music Alignment

Maezawa, Akira 23 March 2015 (has links)
Kyoto University / Doctor of Informatics (doctoral program), degree no. 甲第19106号 (情博第552号) / Graduate School of Informatics, Department of Intelligence Science and Technology, Kyoto University / Examining committee: Prof. Tatsuya Kawahara (chief examiner), Prof. Toshiyuki Tanaka, Lecturer Kazuyoshi Yoshii / Qualified under Article 4, Paragraph 1 of the Degree Regulations
