About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
81

Views 2 : reflections on Views

Mason, Jonathan Eli. 10 April 2008 (has links)
No description available.
82

Spectral Probabilistic Modeling and Applications to Natural Language Processing

Parikh, Ankur 01 August 2015 (has links)
Probabilistic modeling with latent variables is a powerful paradigm that has led to key advances in many applications such as natural language processing, text mining, and computational biology. Unfortunately, while introducing latent variables substantially increases representation power, learning and modeling can become considerably more complicated. Most existing solutions largely ignore non-identifiability issues in modeling and formulate learning as a nonconvex optimization problem, where convergence to the optimal solution is not guaranteed due to local minima. In this thesis, we propose to tackle these problems through the lens of linear/multi-linear algebra. Viewing latent variable models from this perspective allows us to approach key problems such as structure learning and parameter learning using tools such as matrix/tensor decompositions, inversion, and additive metrics. These new tools enable us to develop novel solutions to learning in latent variable models with theoretical and practical advantages. For example, our spectral parameter learning methods for latent trees and junction trees are provably consistent, local-optima-free, and 1-2 orders of magnitude faster than EM for large sample sizes. In addition, we focus on applications in Natural Language Processing, using our insights not only to devise new algorithms, but also to propose new models. Our method for unsupervised parsing is the first algorithm that both has theoretical guarantees and is practical, comparing favorably to the CCM method of Klein and Manning. We also develop power low rank ensembles, a framework for language modeling that generalizes existing n-gram techniques to non-integer n. It consistently outperforms state-of-the-art Kneser-Ney baselines and can train on billion-word datasets in a few hours.
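The power low rank ensembles mentioned above rest on the idea that a low-rank approximation of co-occurrence statistics can assign probability to word pairs never seen in training. The sketch below is an illustrative toy, not the thesis's actual method: it applies a plain truncated SVD to a made-up bigram count matrix (all numbers hypothetical) to show how low-rank structure smooths unseen bigrams into proper conditional distributions.

```python
import numpy as np

# Toy bigram count matrix over a 4-word vocabulary (rows: history word,
# cols: next word). Zero entries are unseen bigrams that plain
# maximum-likelihood n-gram estimates cannot score.
counts = np.array([
    [4.0, 2.0, 0.0, 0.0],
    [3.0, 0.0, 1.0, 0.0],
    [0.0, 2.0, 0.0, 2.0],
    [1.0, 0.0, 3.0, 0.0],
])

# Truncated SVD gives the best rank-k approximation in Frobenius norm
# (Eckart-Young); the low-rank structure spreads probability mass onto
# unseen bigrams, the intuition behind low-rank language modeling.
U, s, Vt = np.linalg.svd(counts, full_matrices=False)
k = 2
low_rank = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Clip negatives (SVD output need not be nonnegative) and renormalize
# each row into a conditional next-word distribution.
probs = np.clip(low_rank, 0.0, None) + 1e-12
probs = probs / probs.sum(axis=1, keepdims=True)
```

In a real system the rank, the clipping scheme, and the combination with discounted counts would all matter; this only demonstrates the low-rank smoothing idea in isolation.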
83

A study of the prediction performance and multivariate extensions of the horseshoe estimator

Yunfan Li (6624032) 14 May 2019 (has links)
The horseshoe prior has been shown to successfully handle high-dimensional sparse estimation problems. It both adapts to sparsity efficiently and provides nearly unbiased estimates for large signals. In addition, efficient sampling algorithms have been developed and successfully applied to a vast array of high-dimensional sparse estimation problems. In this dissertation, we investigate the prediction performance of the horseshoe prior in sparse regression, and extend the horseshoe prior to two multivariate settings.

We begin with a study of the finite-sample prediction performance of shrinkage regression methods, where the risk can be unbiasedly estimated using Stein's approach. We show that the horseshoe prior achieves an improved prediction risk over global shrinkage rules, by using a component-specific local shrinkage term that is learned from the data under a heavy-tailed prior, in combination with a global term providing shrinkage towards zero. We demonstrate improved prediction performance in a simulation study and in a pharmacogenomics data set, confirming our theoretical findings.

We then extend the horseshoe prior to handle two high-dimensional multivariate problems. First, we develop a new estimator of the inverse covariance matrix for high-dimensional multivariate normal data. The proposed graphical horseshoe estimator has attractive properties compared to other popular estimators. The most prominent benefit is that when the true inverse covariance matrix is sparse, the graphical horseshoe estimator provides estimates with small information divergence from the sampling model. The posterior mean under the graphical horseshoe prior can also be almost unbiased under certain conditions. In addition to these theoretical results, we provide a full Gibbs sampler for implementation. The graphical horseshoe estimator compares favorably to existing techniques in simulations and in a human gene network data analysis.

In our second setting, we apply the horseshoe prior to the joint estimation of regression coefficients and the inverse covariance matrix in normal models. The computational challenge in this problem is due to the dimensionality of the parameter space, which routinely exceeds the sample size. We show that the advantages of the horseshoe prior in estimating a mean vector or an inverse covariance matrix separately carry over to estimating both simultaneously. We propose a full Bayesian treatment, with a sampling algorithm that is linear in the number of predictors. Extensive performance comparisons are provided with both frequentist and Bayesian alternatives, and both estimation and prediction performance are verified on a genomic data set.
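The component-specific shrinkage behaviour described above can be illustrated directly. In the standard horseshoe formulation, each coefficient carries a shrinkage factor kappa_j = 1/(1 + tau^2 lambda_j^2) with lambda_j half-Cauchy; the sketch below (illustrative only, with the global scale tau fixed at 1) samples these factors to show the characteristic horseshoe shape: mass piles up near 0 (large signals left nearly unshrunk) and near 1 (noise shrunk aggressively toward zero).

```python
import numpy as np

rng = np.random.default_rng(0)

# Horseshoe prior: beta_j ~ N(0, lambda_j^2 tau^2), lambda_j ~ HalfCauchy(0, 1).
# The shrinkage factor kappa_j = 1 / (1 + tau^2 lambda_j^2) governs how much
# each coefficient is pulled toward zero: kappa near 1 kills noise,
# kappa near 0 keeps signal. With tau = 1, kappa follows a Beta(1/2, 1/2)
# distribution, whose U shape gives the prior its name.
tau = 1.0                                   # global shrinkage scale (smaller tau pushes mass toward 1)
lam = np.abs(rng.standard_cauchy(100_000))  # component-wise local scales
kappa = 1.0 / (1.0 + (tau * lam) ** 2)

# Mass concentrates at both ends of [0, 1], not in the middle.
near_one = np.mean(kappa > 0.9)   # aggressively shrunk components (noise)
near_zero = np.mean(kappa < 0.1)  # nearly unshrunk components (signal)
```

The adaptivity claimed in the abstract comes from exactly this split: the heavy half-Cauchy tail lets individual lambda_j escape the global shrinkage toward zero.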
84

Formulação do método dos elementos de contorno para análise de chapas com enrijecedores / Formulation of the boundary element method for analysis of stiffened plates

Wutzow, Wilson Wesley 23 May 2003 (has links)
Neste trabalho, a formulação linear do método dos elementos de contorno - MEC, para elasticidade bidimensional, é empregada para estudo de domínios enrijecidos sendo os enrijecedores abordados de duas formas, a primeira muito conhecida trata-se da técnica de sub-região ou acoplamento MEC/MEC e a segunda também pelo mesmo tipo de acoplamento, mas agora condensando-se as variáveis do contorno para a linha central do enrijecedor. Esta técnica juntamente com a integração completamente analítica dos termos da equação Somigliana proporcionam bons resultados eliminando perturbações em enrijecedores finos. Com o intuito de obter melhores resultados aplica-se ainda a técnica de suavização do contorno por mínimos quadrados. Aspectos gráficos são abordados na criação do pré e pós processador, sendo o pré-processador um interpretador de arquivos de formato dxf e o pós-processador destinado a representação gráfica dos resultados através de mapas e isolinhas de tensão, deformação e deslocamentos. / In this work, the linear formulation of the boundary element method - BEM, for two-dimensional elasticity, is used to analyse stiffened domains. The stiffener contribution is taken into account in two different ways: the first by the well-known sub-region technique, or BEM/BEM coupling, and the second also based on BEM/BEM coupling, but now condensing the boundary variables onto the central line of the stiffener. This technique, together with fully analytical integration of the terms of the Somigliana equation, provides very good results, eliminating the perturbations that may occur with thin stiffeners. In order to obtain better and smoother results, a least-squares boundary smoothing technique was also applied. Pre- and post-processors were developed and implemented for visualization of the input data and the final results. The pre-processor was written as a dxf file reader, while the post-processor renders contour maps and iso-lines of stress, strain and displacement.
85

Maximum Likelihood Identification of an Information Matrix Under Constraints in a Corresponding Graphical Model

Li, Nan 22 January 2017 (has links)
We address the problem of identifying the neighborhood structure of an undirected graph, whose nodes are labeled with the elements of a multivariate normal (MVN) random vector. A semi-definite program is given for estimating the information matrix under arbitrary constraints on its elements. More importantly, a closed-form expression is given for the maximum likelihood (ML) estimator of the information matrix, under the constraint that the information matrix has pre-specified elements in a given pattern (e.g., in a principal submatrix). The results apply to the identification of dependency labels in a graphical model with neighborhood constraints. This neighborhood structure excludes nodes which are conditionally independent of a given node, and the graph is determined by the non-zero elements in the information matrix for the random vector. A cross-validation principle is given for determining whether the constrained information matrix returned from this procedure is an acceptable model for the information matrix, and as a consequence for the neighborhood structure of the Markov Random Field (MRF) that is identified with the MVN random vector.
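The connection between zeros of the information matrix and conditional independence, which drives the neighborhood-identification problem above, can be seen in a small simulation. The sketch below (a hypothetical 3-variable example, not the thesis's constrained estimator) computes the unconstrained ML information matrix as the inverse of the sample covariance, and shows why sampling noise alone never produces exact zeros, which is what motivates pattern-constrained estimation.

```python
import numpy as np

rng = np.random.default_rng(1)

# A sparse 3x3 information (precision) matrix: variables 0 and 2 are
# conditionally independent given variable 1, because K[0, 2] == 0.
K = np.array([
    [2.0, 0.6, 0.0],
    [0.6, 2.0, 0.6],
    [0.0, 0.6, 2.0],
])
cov = np.linalg.inv(K)

# Draw MVN samples and form the unconstrained ML estimate of the
# information matrix: the inverse of the sample covariance (mean known
# to be zero). Sampling noise makes every entry of K_hat nonzero, so
# the conditional-independence zero is only recovered approximately.
n = 50_000
X = rng.multivariate_normal(np.zeros(3), cov, size=n)
S = (X.T @ X) / n          # sample covariance
K_hat = np.linalg.inv(S)   # unconstrained ML information matrix
```

A constrained estimator of the kind the abstract describes would instead fix K_hat[0, 2] = 0 (the pre-specified pattern) and maximize the likelihood over the remaining entries.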
86

Programming paradigms, information types and graphical representations : empirical investigations of novice program comprehension

Good, Judith January 1999 (has links)
This thesis describes research into the role of various factors in novice program comprehension, including the underlying programming paradigm, the representational features of the programming language, and the various types of information which can be derived from the program. The main postulate of the thesis is that there is no unique method for understanding programs, and that program comprehension will be influenced by, among other things, the way in which programs are represented, both semantically and syntactically. This idea has implications for the learning of programming, particularly in terms of how these concepts should be embodied. The thesis is focused on three empirical studies. The first study, based on the so-called "information types" studies, challenged the idea that program comprehension is an invariant process over languages, and suggested that programming language will have a differential effect on comprehension, as evidenced by the types of information which novices are able to extract from a program. Despite the use of a markedly different language from earlier studies, the results were broadly similar. However, it was suggested that there are other factors additional to programming notation which intervene in the comprehension process, and which cannot be discounted. Furthermore, the study highlighted the need to tie the hypotheses about information extraction more closely to the programming paradigm. The second study introduced a graphical component into the investigation, and looked at the way in which visual representations of programs combine with programming paradigm to influence comprehension. The mis-match conjecture, which suggests that tasks requiring information which is highlighted by a notation will be facilitated relative to tasks where the information must be inferred, was applied to programming paradigm. The study showed that the mis-match effect can be overridden by other factors, most notably subjects' prior experience and the programming culture in which they are taught. The third study combined the methodologies of the first two studies to look at the mis-match conjecture within the wider context of information types. Using graphical representations of the control flow and data flow paradigms, it showed that, despite a bias toward one paradigm based on prior experience and culture, programming paradigm does influence the way in which the program is understood, resulting in improved performance on tasks requiring information which the paradigm is hypothesised to highlight. Furthermore, this effect extends to groups of information which could be said to be theoretically related to the information being highlighted. The thesis also proposes a new and more precise methodology for the analysis of students' accounts of their comprehension of a program, a form of data which is typically derived from the information types studies. It then shows how an analysis of this qualitative data can be used to provide further support for the quantitative results. Finally, the thesis suggests how the core results could be used to develop computer-based support environments for novice visual programming, and provides other suggestions for further work.
87

Development of GUI test coverage analysis and enforcement tools

Ferreira, Ricardo Daniel Ferreira January 2009 (has links)
Integrated master's thesis (tese de mestrado integrado). Engenharia Informática e Computação, Faculdade de Engenharia, Universidade do Porto, 2009.
88

Reverse engineering of GUI models

Grilo, André Macedo Pinto January 2009 (has links)
Integrated master's thesis (tese de mestrado integrado). Engenharia Informática e Computação, Faculdade de Engenharia, Universidade do Porto, 2009.
89

An analysis of the military engineering logistics planning problem

Denham, David R., n/a January 1982 (has links)
Logistics is defined in the Concise Oxford Dictionary as "the art of moving and quartering troops, and supplying and maintaining a fleet". While this definition is rather narrow, it nevertheless gives a general guide to the broad military support field known as logistics. This thesis is concerned with one of the more complex military logistics problems, namely the allocation of men, equipment and materiel to the engineer tasks associated with the movement, maintenance and support of military forces in a theatre of operations. The major factors are:
a. tasks to be carried out by engineers;
b. the number and type of available engineer construction units;
c. stores and transport constraints imposed by other agencies in the Defence logistics system;
d. deadlines imposed by the Commander and his staff;
e. efficiency and proficiency of engineer construction units in carrying out particular tasks; and
f. risk, including (1) possible enemy action (both directly against units, or indirectly against other elements in the logistics system) and (2) adverse weather (both direct and indirect).
Current military procedures for solving this type of problem are based on practices developed during World War II, and rely heavily on support from external sources (this was the case throughout the conflicts in Korea, Malaya and Vietnam). The recent change in Government defence policy requiring greater reliance on our own resources has meant that new solutions have had to be found to old problems. The aim of this thesis is therefore three-fold:
a. to analyse the problem in terms of its various components;
b. to develop a method whereby the problem can be solved manually in an efficient manner (but still considering all the relevant factors); and
c. to develop a method whereby the solution to the problem can be optimized, using computers where necessary.
Mathematical equations are developed for all factors in the engineer logistics planning problem, and a graphical technique is developed which enables a solution to the problem to be found quickly using manual methods. The approach to the development of the graphical technique is based on some ideas presented by V.V. Kolbin in his book "Stochastic Programming".
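As a toy illustration of the allocation component of the problem (not the thesis's graphical technique), the sketch below brute-forces a one-to-one assignment of construction units to tasks so as to minimize the latest finish time, then checks the result against a commander's deadline; all unit/task times are hypothetical.

```python
from itertools import permutations

# Hypothetical completion times: time[u][t] is the days unit u needs for
# task t, folding in the efficiency/proficiency factor from the abstract.
time = [
    [4, 7, 3],   # unit 0
    [6, 2, 8],   # unit 1
    [5, 5, 4],   # unit 2
]
deadline = 6  # days imposed by the Commander

def best_assignment(time, deadline):
    """Brute-force search over one-to-one unit/task assignments,
    minimizing the makespan (latest task finish time)."""
    best, best_makespan = None, float("inf")
    for perm in permutations(range(len(time))):  # perm[u] = task given to unit u
        makespan = max(time[u][perm[u]] for u in range(len(time)))
        if makespan < best_makespan:
            best, best_makespan = perm, makespan
    feasible = best_makespan <= deadline
    return best, best_makespan, feasible

assignment, makespan, feasible = best_assignment(time, deadline)
# assignment == (0, 1, 2): each unit takes its fastest compatible task,
# finishing in 4 days, inside the 6-day deadline.
```

Brute force is only viable for a handful of units; the thesis's manual graphical method and computer optimization address the realistic case with many units, tasks, stores constraints and risk factors.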
90

High Resolution Tiled Displays at the University of Maine

Bourgoin, Nathan January 2010 (has links) (PDF)
No description available.
