71

A new adaptive multiscale finite element method with applications to high contrast interface problems

Millward, Raymond January 2011 (has links)
In this thesis we show that the finite element error for the high contrast elliptic interface problem is independent of the contrast in the material coefficient under certain assumptions. The error estimate is proved by a rather technical argument that constructs a specific function in the finite-dimensional space of piecewise linear functions. We review the multiscale finite element method of Chu, Graham and Hou to give clearer insight. We present some generalisations extending their work on a priori contrast-independent local boundary conditions, which are then used to find multiscale basis functions by solving a set of local problems. We make use of their regularity result to prove a new relative error estimate, for both the standard finite element method and the multiscale finite element method, that is completely coefficient independent. The analytical results we explore in this thesis require a complicated construction. To avoid this we present an adaptive multiscale finite element method as an enhancement of the adaptive local-global method of Durlofsky, Efendiev and Ginting. We show numerically that this adaptive method converges optimally, as if the coefficient were smooth, both in the presence of singularities and for a realisation of a random field. The novel application of this thesis is the use of the adaptive multiscale finite element method on the linear elasticity problem arising from the structural optimisation process in mechanical engineering. We show that a much smoother sensitivity profile is achieved along the edges of a structure with the adaptive method, with no additional heuristic smoothing techniques needed. We finally show that the new adaptive method can be implemented efficiently in parallel, and that the processing time scales well as the number of processors increases. The biggest advantage of the multiscale method is that the basis functions can be reused for additional problems with the same high contrast material coefficient.
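The multiscale basis functions come from small elliptic solves on each coarse element. The sketch below is our own minimal 1D illustration, not code from the thesis: it solves -(a(x)u')' = 0 on a single coarse element with boundary values 1 and 0, so the resulting basis function adapts to a high-contrast inclusion.

```python
import numpy as np

# Hypothetical 1D illustration: build one multiscale FE basis function on a
# coarse element by solving the local problem -(a(x) u')' = 0 on a fine grid,
# so the basis absorbs the high-contrast coefficient a(x).

def local_multiscale_basis(a):
    """a: coefficient per fine cell (length n); boundary values u(0)=1, u(1)=0."""
    n = len(a)
    # Tridiagonal system for the n-1 interior fine nodes (h factors cancel).
    main = a[:-1] + a[1:]
    A = np.diag(main) - np.diag(a[1:-1], 1) - np.diag(a[1:-1], -1)
    rhs = np.zeros(n - 1)
    rhs[0] = a[0] * 1.0            # contribution of the boundary value u(0) = 1
    u = np.linalg.solve(A, rhs)
    return np.concatenate(([1.0], u, [0.0]))

a = np.ones(32)
a[10:20] = 1e6                     # high-contrast inclusion inside the element
phi = local_multiscale_basis(a)
print(phi.round(3))                # the basis flattens across the inclusion
```

Because flux is constant in the local solution, the gradient scales like 1/a, so the printed basis is nearly constant over the inclusion, which is exactly the behaviour a linear hat function cannot capture.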
72

Extração de informações de conferências em páginas web / Extraction of conference information from web pages

Garcia, Cássio Alan January 2017 (has links)
Choosing the most suitable conference for submitting a paper depends on several factors: (i) the topic of the paper must be among the topics of interest of the event; (ii) the submission deadline must be compatible with the time needed to write the paper; (iii) conference location and registration costs are taken into account; and (iv) the quality of the conference, as assessed by CAPES through the Qualis rating. These factors, combined with the existence of thousands of conferences, make the search for the right event very time consuming, especially when researching in a new area. To help researchers find suitable conferences, this work presents a method for collecting and extracting data from conference web sites. This is a challenging task, mainly because each conference has its own site with its own layout. The proposed method, CONFTRACKER, combines the identification of the URLs of conferences listed in the Qualis Table with the extraction of deadlines from their sites, independently of the conference, the site layout, and how the dates are presented (formatting and labels). To evaluate the method, experiments were carried out with real web data from Computer Science conferences. The results show that CONFTRACKER significantly outperformed a baseline based on the position of labels and dates. Finally, the extraction process is run over all conferences in the Qualis Table, and the collected data populate a database that can be queried through an online interface.
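As a rough illustration of the deadline-extraction step (our own sketch; CONFTRACKER itself is layout-independent and handles labels and date formats in a more principled way), a pattern-based scan of a page's text might look like this:

```python
import re

# Hedged sketch of deadline extraction: scan the text of a conference page for
# lines that pair a deadline-like label with a date expression.

DATE = re.compile(
    r"\b(?:\d{1,2}\s+)?(?:Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)[a-z]*"
    r"\s+(?:\d{1,2},?\s*)?\d{4}\b",
    re.IGNORECASE)
LABEL = re.compile(r"deadline|submission|due", re.IGNORECASE)

def extract_deadlines(page_text):
    """Return (label line, date string) pairs found in the page text."""
    hits = []
    for line in page_text.splitlines():
        if LABEL.search(line):
            for m in DATE.finditer(line):
                hits.append((line.strip(), m.group()))
    return hits

sample = """Important Dates
Paper submission deadline: March 15, 2017
Notification: May 20, 2017"""
print(extract_deadlines(sample))
# [('Paper submission deadline: March 15, 2017', 'March 15, 2017')]
```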
73

Probabilistic models for information extraction: from cascaded approach to joint approach. / CUHK electronic theses & dissertations collection

January 2010 (has links)
Information Extraction (IE) aims at identifying specific pieces of information (data) in unstructured or semi-structured textual documents and transforming unstructured information in a corpus of documents or Web pages into a structured database. Several representative tasks in IE are: named entity recognition (NER), which aims at identifying phrases that denote types of named entities; entity relation extraction, which aims at discovering the events or relations related to the entities; and coreference resolution, which aims at determining whether two extracted mentions of entities refer to the same object. IE is useful for a wide variety of applications.

We have investigated and developed a cascaded framework that combines entity extraction with qualitative domain knowledge, based on undirected, discriminatively-trained probabilistic graphical models. This framework consists of two stages and combines statistical learning with first-order logic. As a pipeline model, the first stage is a base model and the second stage validates and corrects the errors made by the base model. We incorporated domain knowledge that can be well formulated in first-order logic to extract entity candidates from the base model. We applied this framework and achieved encouraging results in Chinese NER on the People's Daily corpus.

The end-to-end performance of high-level IE systems for compound tasks is often hampered by the use of cascaded frameworks. The integrated model we proposed can alleviate some of these problems, but it is only loosely coupled: parameter estimation is performed independently, and information flows in one direction only. In this top-down integration model, the decision of the bottom sub-model can guide the decision of the upper sub-model, but not vice versa, so deep interactions and dependencies between the tasks can hardly be captured.

We present a general, strongly-coupled, and bidirectional architecture based on discriminatively trained factor graphs for information extraction, consisting of two components, segmentation and relation. We first introduce joint factors connecting variables of the relevant subtasks to capture the dependencies and interactions between them. We then propose a strong bidirectional Markov chain Monte Carlo (MCMC) sampling inference algorithm that allows information to flow in both directions to find the approximate maximum a posteriori (MAP) solution for all subtasks. Notably, our framework is considerably simpler to implement than previous ones, and outperforms them.

Based on these observations and analysis, we propose a joint discriminative probabilistic framework to optimize all relevant subtasks simultaneously. This framework defines a joint probability distribution, in the form of an exponential family, over both segmentations of sequence data and relations among segments. The model allows tight interactions between segmentations and relations, and offers a natural formulation of IE tasks. Since exact parameter estimation and inference are prohibitively intractable, a structured variational inference algorithm is developed for approximate parameter estimation. For inference, we propose a strong bidirectional Metropolis-Hastings (MH) approach to find the MAP assignment of joint segmentations and relations, exploiting mutual benefits in both directions, so that segmentations can aid relations and vice versa.

We perform extensive experiments on three important IE tasks using real-world datasets, namely Chinese NER, entity identification and relationship extraction from Wikipedia's encyclopedic articles, and citation matching, to test our proposed models: the bidirectional model, the integrated model, and the joint model. Experimental results show that our models significantly outperform current state-of-the-art probabilistic models, such as decoupled and joint models, illustrating the feasibility and promise of our proposed approaches. (Abstract shortened by UMI.)

Yu, Xiaofeng. / Adviser: Zam Wai. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2010. / Includes bibliographical references (leaves 109-123). / Abstract also in Chinese.
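As a toy illustration of bidirectional MAP search (entirely our own sketch, with a made-up two-variable model rather than the thesis's factor graphs), the following alternates Metropolis-Hastings proposals between a segmentation variable and a relation variable, so acceptance in either direction can pull the other along:

```python
import math
import random

# Toy joint model p(seg, rel) ∝ exp(-energy): the relation label and the
# segment labels are mutually compatible, so information flows both ways.

def energy(seg, rel):
    # Hypothetical compatibility: relation 1 prefers segments labeled (1, 1).
    return -(2.0 if (rel == 1 and seg == (1, 1)) else 0.0) + 0.5 * sum(seg)

def mh_map(steps=10000, temp=1.0, seed=0):
    random.seed(seed)
    seg, rel = (0, 0), 0
    best, best_e = (seg, rel), energy(seg, rel)
    for _ in range(steps):
        # Alternate proposal direction: flip one segment label or the relation.
        if random.random() < 0.5:
            i = random.randrange(2)
            cand = (tuple(1 - s if j == i else s for j, s in enumerate(seg)), rel)
        else:
            cand = (seg, 1 - rel)
        dE = energy(*cand) - energy(seg, rel)
        if dE < 0 or random.random() < math.exp(-dE / temp):
            seg, rel = cand
            if energy(seg, rel) < best_e:
                best, best_e = (seg, rel), energy(seg, rel)
    return best

print(mh_map())  # expected MAP assignment: ((1, 1), 1)
```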
74

Markov random fields based image and video processing. / CUHK electronic theses & dissertations collection / Digital dissertation consortium

January 2010 (has links)
Many problems in computer vision involve assigning each pixel a label, which represents some spatially varying quantity such as image intensity in image denoising or an object index in image segmentation. Such quantities tend to be spatially piecewise smooth, since they vary smoothly on object surfaces and change dramatically at object boundaries; in video processing, an additional temporal smoothness holds, as corresponding pixels in different frames should have similar labels. Markov random field (MRF) models provide a robust and unified framework for many image and video applications. The framework can be elegantly expressed as an MRF-based energy minimization problem, where two penalty terms are defined with different forms. Many approaches have been proposed to solve the MRF-based energy optimization problem, such as simulated annealing, iterated conditional modes, graph cuts, and belief propagation.

In this dissertation, we propose three methods to solve the problems of interactive image segmentation, video completion, and image denoising, all formulated as MRF-based energy minimization problems. In our algorithms, MRF-based energy functions are designed to fit each problem, with techniques suited to the characteristics of each task, and different optimization schemes are proposed to find the optimal results. In interactive image segmentation, an iterative optimization based framework is proposed, where in each iteration an MRF-based energy function incorporating an estimated initial probabilistic map of the image is optimized with a relaxed global optimal solution. In video completion, a well-defined MRF energy function involving both spatial and temporal coherence relationships is constructed from the local motions calculated in the first step of the algorithm, and a hierarchical belief propagation scheme is proposed to solve the problem efficiently. In image denoising, label relaxation based optimization on a Gaussian MRF energy is used to achieve the globally optimal closed form solution.

Promising results obtained by the proposed algorithms, with both quantitative and qualitative comparisons to state-of-the-art methods, demonstrate their effectiveness in these image and video processing applications.

Liu, Ming. / Adviser: Xiaoou Tang. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2010. / Includes bibliographical references (leaves 79-89). / Abstract also in Chinese.
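As a minimal illustration of MRF-based energy minimization of the kind surveyed above (our own sketch, using iterated conditional modes, one of the optimizers the abstract lists, on binary denoising):

```python
import numpy as np

# Sketch: each pixel greedily takes the label minimizing a quadratic data term
# plus a Potts smoothness penalty over its 4-neighbors (ICM).

def icm_denoise(noisy, labels=(0, 1), beta=1.5, sweeps=5):
    x = noisy.copy()
    H, W = x.shape
    for _ in range(sweeps):
        for i in range(H):
            for j in range(W):
                nbrs = [x[a, b] for a, b in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                        if 0 <= a < H and 0 <= b < W]
                costs = [(noisy[i, j] - l) ** 2 + beta * sum(l != n for n in nbrs)
                         for l in labels]
                x[i, j] = labels[int(np.argmin(costs))]
    return x

rng = np.random.default_rng(0)
clean = np.zeros((16, 16), int); clean[4:12, 4:12] = 1
noisy = np.where(rng.random(clean.shape) < 0.15, 1 - clean, clean)
print(np.abs(icm_denoise(noisy) - clean).sum(), "pixels still wrong")
```

ICM only finds a local minimum of the energy; the dissertation's relaxation and hierarchical belief propagation schemes are aimed precisely at doing better than this kind of greedy update.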
75

Um estudo sobre estimação e predição em modelos geoestatísticos bivariados / A study on estimation and prediction in bivariate geostatistical models

Bruno Henrique Fernandes Fonseca 05 March 2009 (has links)
Bivariate geostatistical models define random functions for two stochastic processes with known spatial locations. A latent Gaussian random field can be assumed for each random variable. This Gaussianity assumption on the latent process is convenient for inference on the model parameters and for spatial prediction, since the joint distribution for a set of points of the latent process is multivariate normal. The covariance matrix of this distribution must be positive definite and must encode the spatial variability structure between and within the attributes. Gelfand et al. (2004) and Diggle and Ribeiro Jr. (2007) proposed strategies for structuring this matrix, but there are few reports on their use or on comparative evaluations of the approaches. This work reports a simulation study of bivariate geostatistical models together with maximum likelihood estimation and ordinary kriging, under different configurations of sampling locations. Soil sample data from a 51.8-hectare field are also analysed, with two soil attributes, pH and base saturation, observed at 67 georeferenced locations and submitted to spatial descriptive analysis, univariate and bivariate geostatistical modelling, and spatial prediction. To check for advantages of adopting univariate or bivariate models, the sample of base saturation, the more expensive variable to collect, was divided into a modelling subsample and a control subsample. The first was used to fit the geostatistical models, and the second was used to compare the precision of the spatial predictions at the locations omitted from the modelling process.
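For reference, ordinary kriging reduces to solving a small augmented linear system; the sketch below is our own illustration with an assumed exponential covariance, not code from the thesis:

```python
import numpy as np

# Ordinary kriging: predict the value at a new location as a weighted sum of
# the observations, with weights constrained to sum to one (unbiasedness).

def exp_cov(d, sill=1.0, range_par=10.0):
    return sill * np.exp(-d / range_par)    # assumed exponential covariance

def ordinary_kriging(coords, values, target):
    n = len(values)
    D = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    # Augmented system: covariance block plus a row/column of ones for the
    # Lagrange multiplier enforcing sum(weights) == 1.
    K = np.ones((n + 1, n + 1)); K[:n, :n] = exp_cov(D); K[n, n] = 0.0
    k = np.ones(n + 1); k[:n] = exp_cov(np.linalg.norm(coords - target, axis=1))
    w = np.linalg.solve(K, k)
    return w[:n] @ values

coords = np.array([[0., 0.], [0., 5.], [5., 0.], [5., 5.]])
values = np.array([4.8, 5.1, 5.6, 5.9])     # e.g. soil pH at four locations
print(ordinary_kriging(coords, values, np.array([2.5, 2.5])))
```

In the bivariate setting studied in the thesis, the covariance block is replaced by a cross-covariance structure between the two attributes, which is exactly the matrix whose construction Gelfand et al. and Diggle and Ribeiro Jr. address.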
76

Issues in Bayesian Gaussian Markov random field models with application to intersensor calibration

Liang, Dong 01 December 2009 (has links)
A long term record of the earth's vegetation is important in studies of global climate change. Over the last three decades, multiple data sets on vegetation have been collected using different satellite-based sensors, and there is a need for methods that combine these data into a long term earth system data record. The Advanced Very High Resolution Radiometer (AVHRR) has provided reflectance measures of the entire earth since 1978, and physical and statistical models have been used to improve the consistency and reliability of this record. The Moderate Resolution Imaging Spectroradiometer (MODIS) has provided measurements with superior radiometric properties and geolocation accuracy; however, this record is available only since 2000. In this thesis, we perform statistical calibration of AVHRR to MODIS. We aim to: (1) fill in gaps in the ongoing MODIS record; (2) extend MODIS values back to 1982. We propose Bayesian mixed models to predict MODIS values using snow cover and AVHRR values as covariates, with random effects accounting for spatiotemporal correlation in the data. We estimate the parameters from the data after 2000 using Markov chain Monte Carlo methods, and then back-predict MODIS data between 1978 and 1999 using the posterior samples of the parameter estimates. We develop new Conditional Autoregressive (CAR) models for seasonal data, along with new sampling methods for CAR models. Our approach enables filling gaps in the MODIS record and back-predicting these values to construct a consistent historical record. The Bayesian framework incorporates multiple sources of variation in estimating the accuracy of the obtained data. The approach is illustrated using vegetation data over a region in Minnesota.
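As a sketch of the CAR building block (our own illustration; the thesis's seasonal CAR models are more elaborate), a proper CAR prior on a grid can be assembled and sampled as follows:

```python
import numpy as np

# Proper Conditional Autoregressive (CAR) prior: joint precision
# Q = tau * (D - alpha * W), with W the neighbor adjacency matrix,
# D = diag(row sums of W), and |alpha| < 1 so that Q is positive definite.

def car_precision(rows, cols, alpha=0.9, tau=1.0):
    n = rows * cols
    W = np.zeros((n, n))
    for i in range(rows):
        for j in range(cols):
            for a, b in ((i + 1, j), (i, j + 1)):   # 4-neighbour grid links
                if a < rows and b < cols:
                    u, v = i * cols + j, a * cols + b
                    W[u, v] = W[v, u] = 1.0
    return tau * (np.diag(W.sum(1)) - alpha * W)

# Draw one sample from the prior: with Q = L L^T, solving L^T x = z for
# z ~ N(0, I) gives x ~ N(0, Q^{-1}).
Q = car_precision(8, 8)
L = np.linalg.cholesky(Q)
x = np.linalg.solve(L.T, np.random.default_rng(1).standard_normal(64))
print(x.reshape(8, 8).round(2))   # spatially smooth draw on the 8x8 grid
```

Sampling through the sparse precision matrix rather than the dense covariance is what makes CAR models attractive for large spatiotemporal lattices like satellite grids.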
77

Optimal Bayesian Estimators for Image Segmentation and Surface Reconstruction

Marroquin, Jose L. 01 April 1985 (has links)
A very fruitful approach to the solution of image segmentation and surface reconstruction tasks is their formulation as estimation problems via the use of Markov random field models and Bayes theory. However, the Maximum a Posteriori (MAP) estimate, which is the one most frequently used, is suboptimal in these cases. We show that for segmentation problems the optimal Bayesian estimator is the maximizer of the posterior marginals, while for reconstruction tasks, the threshold posterior mean has the best possible performance. We present efficient distributed algorithms for approximating these estimates in the general case. Based on these results, we develop a maximum likelihood scheme that leads to a parameter-free distributed algorithm for restoring piecewise constant images. To illustrate these ideas, the reconstruction of binary patterns is discussed in detail.
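A minimal sketch of the maximizer-of-posterior-marginals (MPM) idea (our own illustration, using Gibbs sampling rather than the paper's distributed algorithms): estimate each pixel's marginal from posterior samples and take its most probable label.

```python
import numpy as np

# MPM for binary segmentation: Gibbs-sample an Ising-type posterior, count how
# often each pixel is labeled 1, and threshold the empirical marginals.

def mpm_segment(obs, beta=1.0, noise=0.2, sweeps=200, burn=50, seed=0):
    rng = np.random.default_rng(seed)
    H, W = obs.shape
    x = obs.copy()
    counts = np.zeros(obs.shape)
    for t in range(sweeps):
        for i in range(H):
            for j in range(W):
                s = sum(x[a, b] for a, b in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                        if 0 <= a < H and 0 <= b < W)
                n_nbrs = (i > 0) + (i < H - 1) + (j > 0) + (j < W - 1)
                # log-odds of label 1 vs 0: Ising prior plus channel likelihood
                lo = beta * (2 * s - n_nbrs)
                lo += np.log((1 - noise) / noise) * (1 if obs[i, j] == 1 else -1)
                x[i, j] = int(rng.random() < 1.0 / (1.0 + np.exp(-lo)))
        if t >= burn:
            counts += x
    return (counts / (sweeps - burn) > 0.5).astype(int)

rng = np.random.default_rng(1)
truth = np.zeros((12, 12), int); truth[3:9, 3:9] = 1
obs = np.where(rng.random(truth.shape) < 0.2, 1 - truth, truth)
print((mpm_segment(obs) != truth).sum(), "pixels misclassified")
```

Thresholding the per-pixel marginals minimizes the expected number of misclassified pixels, which is the paper's point about MAP being the wrong criterion for segmentation.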
78

Parallel and Deterministic Algorithms for MRFs: Surface Reconstruction and Integration

Geiger, Davi, Girosi, Federico 01 May 1989 (has links)
In recent years many researchers have investigated the use of Markov random fields (MRFs) for computer vision, but the computational complexity of their implementation has been a drawback. In this paper we derive deterministic approximations to MRF models. All the theoretical results are obtained in the framework of mean field theory from statistical mechanics. Because we use MRF models, the mean field equations lead to parallel and iterative algorithms. One of the models considered for image reconstruction is shown to yield, in a natural way, the graduated non-convexity algorithm proposed by Blake and Zisserman.
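The mean field idea replaces each neighbour's random label by its expected value, turning stochastic sampling into a deterministic, naturally parallel fixed-point iteration. A minimal Ising-type sketch (our own illustration):

```python
import numpy as np

# Mean field approximation for a binary MRF: iterate the self-consistency
# equations m_i = tanh(beta * sum_j m_j + h_i) over all sites in parallel.

def mean_field(h, beta=0.5, iters=100):
    m = np.zeros_like(h)                       # magnetizations in [-1, 1]
    for _ in range(iters):
        # Sum of neighbor means via shifted copies (4-neighbor grid).
        s = np.zeros_like(m)
        s[1:, :] += m[:-1, :]; s[:-1, :] += m[1:, :]
        s[:, 1:] += m[:, :-1]; s[:, :-1] += m[:, 1:]
        m = np.tanh(beta * s + h)              # parallel update of all sites
    return m

h = np.zeros((8, 8)); h[2:6, 2:6] = 1.0        # external field from the data
print(mean_field(h).round(2))                  # smooth, deterministic estimate
```

Every site updates simultaneously from its neighbors' current means, which is why mean field equations map so directly onto parallel hardware.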
79

Probabilistic Solution of Inverse Problems

Marroquin, Jose Luis 01 September 1985 (has links)
In this thesis we study the general problem of reconstructing a function, defined on a finite lattice, from a set of incomplete, noisy and/or ambiguous observations. The goal of this work is to demonstrate the generality and practical value of a probabilistic (in particular, Bayesian) approach to this problem, particularly in the context of Computer Vision. In this approach, the prior knowledge about the solution is expressed in the form of a Gibbsian probability distribution on the space of all possible functions, so that the reconstruction task is formulated as an estimation problem. Our main contributions are the following: (1) We introduce the use of specific error criteria for the design of optimal Bayesian estimators for several classes of problems, and propose a general (Monte Carlo) procedure for approximating them. This new approach leads to a substantial improvement over existing schemes, both in the quality of the results (particularly for low signal to noise ratios) and in computational efficiency. (2) We apply the Bayesian approach to the solution of several problems, some of which are formulated and solved in these terms for the first time. Specifically, these applications are: the reconstruction of piecewise constant surfaces from sparse and noisy observations; the reconstruction of depth from stereoscopic pairs of images; and the formation of perceptual clusters. (3) For each of these applications, we develop fast, deterministic algorithms that approximate the optimal estimators, and illustrate their performance on both synthetic and real data. (4) We propose a new method, based on the analysis of the residual process, for estimating the parameters of the probabilistic models directly from the noisy observations. This scheme leads to an algorithm, with no free parameters, for the restoration of piecewise uniform images. (5) We analyze the implementation of the algorithms we develop in non-conventional hardware, such as massively parallel digital machines, and analog and hybrid networks.
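To illustrate contribution (1): the choice of error criterion changes which estimator is optimal, and all of them can be read off the same Monte Carlo posterior samples. A toy sketch of our own:

```python
import numpy as np

# Different error criteria, same samples: the posterior mean minimizes squared
# error, while the per-site marginal mode minimizes per-site 0-1 loss (MPM).

rng = np.random.default_rng(2)
# Stand-in for 500 posterior samples of a 3-site label field (labels 0-2).
samples = rng.choice(3, size=(500, 3), p=[0.5, 0.3, 0.2])

posterior_mean = samples.mean(axis=0)               # optimal for squared error
mpm = np.array([np.bincount(samples[:, i], minlength=3).argmax()
                for i in range(3)])                 # optimal for per-site 0-1 loss
print("posterior mean:", posterior_mean.round(2))   # fractional values
print("MPM labels:   ", mpm)                        # discrete labels
```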
80

Automated Building Detection From Satellite Images By Using Shadow Information As An Object Invariant

Baris, Yuksel 01 October 2012 (has links) (PDF)
Going beyond the classical pattern recognition techniques applied to automated building detection in satellite images, a robust building detection methodology is proposed, in which self-supervision data are automatically extracted from the image by using shadow, and its direction, as an invariant for the building object. In this methodology, first the vegetation, water and shadow regions are detected in a given satellite image, and local directional fuzzy landscapes representing the likely presence of buildings are generated from the shadow regions using the direction of illumination obtained from the image metadata. For each landscape, foreground (building) and background pixels are automatically determined and a bipartitioning is obtained using the graph-based algorithm GrabCut. Finally, the local results are merged to obtain the final building detection result. The performance evaluation shows this approach to be a proof of concept that shadow is an invariant for the building object, and that promising detection results can be obtained even when a single object invariant is used.
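A sketch of the final GrabCut bipartitioning step using OpenCV (our own illustration; the file name and seed coordinates are made up, standing in for the foreground/background seeds the method derives from the shadow-based fuzzy landscape):

```python
import numpy as np
import cv2

# Seed GrabCut with known foreground/background pixels and let it refine the
# building/background split by graph-cut energy minimization.

img = cv2.imread("patch.png")                 # hypothetical satellite patch
mask = np.full(img.shape[:2], cv2.GC_PR_BGD, np.uint8)
mask[40:60, 40:60] = cv2.GC_FGD               # assumed seeds: building side of a shadow
mask[0:10, :] = cv2.GC_BGD                    # assumed seeds: clear background strip
bgd = np.zeros((1, 65), np.float64)           # internal GMM state for GrabCut
fgd = np.zeros((1, 65), np.float64)
cv2.grabCut(img, mask, None, bgd, fgd, 5, cv2.GC_INIT_WITH_MASK)
building = np.isin(mask, (cv2.GC_FGD, cv2.GC_PR_FGD)).astype(np.uint8)
cv2.imwrite("building_mask.png", building * 255)
```

In the actual methodology the seeds come from the directional fuzzy landscape rather than hand-picked rectangles, and the local masks from each landscape are merged into the final detection result.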
