461

ALTIMETRIA COM TOPOGRAFIA CONVENCIONAL E SENSORIAMENTO REMOTO / ALTIMETRY WITH CONVENTIONAL SURVEYING AND REMOTE SENSING

Pinto, Leandro de Mello 17 April 2012 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / For many decades, the altimetry of the terrain surface was obtained almost exclusively by conventional surveying. Advances in technology enabled space missions and artificial satellites, causing the science of remote sensing to expand exponentially. SRTM (Shuttle Radar Topography Mission) and ASTER GDEM (Advanced Spaceborne Thermal Emission and Reflection Radiometer Global Digital Elevation Model) are space programs that provide elevation information for almost the entire globe. Google Earth uses this information and makes it available to its users quickly and conveniently. Because these data are so easy to access, many users employ them without knowing about the geometric problems in these products, which can compromise the quality of the results obtained. There is therefore a need for a prior evaluation of the quality and applicability of each method. In this context, the objective was to analyze the accuracy of three ways of obtaining elevation: SRTM, ASTER and Google Earth, comparing them with conventional surveying and with GPS, which are more established techniques. To this end, two in situ surveys were performed, one with GPS receivers and the other by conventional surveying; the heights of the surveyed points were compared with the heights given by the three methods under analysis, yielding the discrepancies. The results show that the SRTM data supplied by Embrapa Monitoramento por Satélite through the Brasil em Relevo project are more accurate than the ASTER and Google Earth data; in addition, kriging was found to be the interpolation technique that best spatializes altimetric data.
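The kriging recommendation can be illustrated with a minimal ordinary-kriging sketch in Python; the survey coordinates, heights and the linear variogram below are invented for illustration, not taken from the thesis:

```python
import numpy as np

def ordinary_kriging(xy, z, xy_new, slope=1.0):
    """Ordinary kriging with an assumed linear variogram gamma(h) = slope * h.

    xy     : (n, 2) known point coordinates
    z      : (n,)   known heights
    xy_new : (m, 2) prediction locations
    """
    n = len(z)
    # Pairwise distances between known points.
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    # Kriging system: variogram matrix bordered by the unbiasedness constraint.
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = slope * d
    A[n, n] = 0.0
    preds = []
    for p in xy_new:
        b = np.ones(n + 1)
        b[:n] = slope * np.linalg.norm(xy - p, axis=1)
        w = np.linalg.solve(A, b)      # kriging weights + Lagrange multiplier
        preds.append(w[:n] @ z)
    return np.array(preds)

# Hypothetical survey points (x, y in metres) and orthometric heights (m).
pts = np.array([[0, 0], [100, 0], [0, 100], [100, 100], [50, 60]], float)
h   = np.array([102.3, 104.1, 101.7, 103.5, 102.9])
print(ordinary_kriging(pts, h, np.array([[50.0, 50.0]])))
```

In practice the variogram model would be fitted to the discrepancy data rather than assumed.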
462

DESENVOLVIMENTO DE UM SISTEMA WEB PARA ESTIMATIVA NUMÉRICA DE DADOS METEOROLÓGICOS DO RIO GRANDE DO SUL / DEVELOPMENT OF A WEB SYSTEM FOR NUMERICAL ESTIMATE OF METEOROLOGICAL DATA FROM RIO GRANDE DO SUL

Ferraz, Rafael Camargo 19 January 2010 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / Climate influences a wide range of human activities, and real-time access to climatological data provides information that is fundamental to many of them, above all agriculture. There are currently few weather stations in operation, which leaves many regions worldwide short of information. Given that the state of Rio Grande do Sul bases a large part of its economy on agriculture, and given the relevance of climatological information, this study aimed to develop a web system for the numerical estimation of meteorological data for the state, based on the automatic surface weather stations of the National Institute of Meteorology (INMET), in order to make data available for regions that have no station. Inverse distance weighting was used as the interpolation model, applying exponents from 0 to 5 and then comparing them through the correlation and regression coefficients and the performance index. The web system was built with the PHP, HTML and JavaScript programming languages, backed by a MySQL database; Macromedia Dreamweaver 8.0 was used for the web programming and HeidiSQL to manage the database. Of the nine variables analysed, only four showed excellent performance: temperature, relative humidity, atmospheric pressure and dew point. The interpolation model with exponent 5 performed best for all four variables. Once the best method had been defined, a MySQL database was created and the web system, SWIM (Web System for Meteorological Interpolation), was developed on top of it, giving the application speed, security and reliability.
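As a sketch of the interpolation model the thesis evaluates, here is inverse distance weighting with a tunable exponent; the station layout and temperatures are hypothetical, not INMET data:

```python
import numpy as np

def idw(xy, values, target, p):
    """Inverse distance weighting: weights are 1 / distance**p.

    With p = 0 every station contributes equally (a plain mean);
    larger exponents make the estimate increasingly local. The
    thesis found p = 5 best for its four well-performing variables.
    """
    d = np.linalg.norm(xy - target, axis=1)
    if np.any(d == 0):                 # target coincides with a station
        return values[np.argmin(d)]
    w = 1.0 / d**p
    return np.sum(w * values) / np.sum(w)

# Hypothetical stations: (x, y) in km, air temperature in deg C.
stations = np.array([[0, 0], [80, 10], [20, 90], [60, 50]], float)
temps    = np.array([19.2, 21.5, 18.4, 20.1])
for p in range(6):                     # the exponents 0..5 tested in the thesis
    print(p, round(idw(stations, temps, np.array([40.0, 40.0]), p), 2))
```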
463

Hur bör buller vid befintlig bullervall beräknas? : En GIS-studie i nordvästra Sätra, Gävle / How should noise at an existing noise barrier be calculated? : A GIS study in north-western Sätra, Gävle

Svensson, Agneta January 2017 (has links)
Noise is often a problem in society because of rapid urbanisation. It must be taken into account in new construction and renovation, but also in society in general. Construction activity is high, and with a growing population and the densification of urban regions there is more traffic every day, leading to a noisier society. Constant noise above 45 dBA produces stress factors that in the long run cause cardiovascular problems. Noise calculations are currently made by the municipality and the Swedish Transport Administration (Trafikverket) using the Nordic prediction model from 1996. This study investigates how noise can be calculated at an existing noise barrier. The barrier lies next to the E4 motorway at Gävle, with a residential area on its far side. According to the residents, noise from the E4 has increased since the residential area was completed in the 1980s. The purpose of the study is to improve the understanding of how noise measurements can be processed in a geographic information system (GIS), based on measurements of traffic noise at the existing barrier in the study area, the housing cooperative Gävle 36 by the E4 in Gävle. The study examines whether interpolation of noise measurements in GIS captures all the necessary facts and gives results as reliable as the Nordic prediction model. Noise was measured with a sound level meter at 34 locations, and the data were processed in GIS with different interpolation methods. A questionnaire survey was also carried out among the residents of the area to see how well the data agreed with their experience; the comparison showed that the residents' perception of the noise level matched the measurements. The results show that noise measurement at an existing noise barrier combined with interpolation in GIS works well as a tool: maps and diagrams show how the noise is distributed over an area, and the level at a specific location can be read off. To verify this conclusion, the interpolations were also examined through their residuals, predicted values and prediction standard errors. The recommendation from this study is to use the kriging interpolation method for noise calculation: among the methods examined, kriging had the smallest prediction standard error and among the lowest residuals. The study also shows that wind direction strongly influences how much noise reaches an area, something the Nordic prediction model does not take into account.
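A sketch of the residual comparison used to rank the interpolation methods: leave-one-out cross-validation over the measurement sites. The 34 sites and dBA values below are simulated, and IDW stands in for any of the GIS interpolators; kriging would be plugged in the same way:

```python
import numpy as np

def loo_residuals(xy, z, interpolate):
    """Leave-one-out residuals: predict each measured point from the others.

    interpolate(xy_known, z_known, target) -> predicted value
    """
    res = np.empty(len(z))
    for i in range(len(z)):
        mask = np.arange(len(z)) != i
        res[i] = z[i] - interpolate(xy[mask], z[mask], xy[i])
    return res

def idw(xy, z, target, p=2):
    d = np.linalg.norm(xy - target, axis=1)
    w = 1.0 / np.maximum(d, 1e-12)**p
    return np.sum(w * z) / np.sum(w)

# Simulated noise levels (dBA) at 34 sites, as in the study: louder near x = 0.
rng_xy, rng_n = np.random.default_rng(0), np.random.default_rng(1)
pts = rng_xy.uniform(0, 200, size=(34, 2))
dba = 45 + 15 * np.exp(-pts[:, 0] / 120) + rng_n.normal(0, 1, 34)
r = loo_residuals(pts, dba, idw)
print("mean abs residual:", np.abs(r).mean().round(2))
```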
464

Ajuste de curvas por polinômios com foco no currículo do ensino médio / Curve fitting with polynomials with a focus on the high school curriculum

Santos, Alessandro Silva, 1973- 27 August 2018 (has links)
Advisor: Lúcio Tunes dos Santos / Dissertation (professional master's degree) - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica / This work grew out of a proposal to develop curve fitting with an approach that enriches the study of functions in the primary and secondary school curriculum. The student is introduced to the historical background of fitting, then to curve interpolation, with a focus on polynomial interpolation, and to the least squares method, which covers linear regression as well as models such as the exponential fit. The dissertation also describes a tool of great importance in numerical computation, the concept of fitting error, its definition and how it is estimated. In polynomial interpolation, the student, while developing the Lagrange form, is encouraged to work through the operations, the factored form, and the advantages of the method, such as the number and difficulty of the operations it requires. Inverse interpolation and curve interpolation complete that chapter, which draws on problem situations whenever possible. The least squares method leads the student to choose the fitting function and to determine it from the concept of error minimization. Polynomials of degree one (linear regression) and degree two are covered because of their importance in the high school curriculum. Also exploiting concepts such as logarithms and exponentials, the linear fitting of exponential models is proposed, using problem situations from an area currently in evidence, biomathematics: modelling population growth dynamics. In this way the student encounters forms of prediction that are useful in important areas such as public health (modelling epidemics and the development of pathogens), public policy planning (modelling population growth and distribution), and the behaviour of the economy (as in forecasts of future interest rates). So that this work can serve as a practical and interesting aid for teachers, the final chapter offers suggested problems in worksheet form, making them easy to reproduce and apply in the classroom. / Mestrado / Matemática em Rede Nacional - PROFMAT / Mestre em Matemática em Rede Nacional - PROFMAT
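A short sketch of two techniques the dissertation teaches: Lagrange interpolation and the log-linearized exponential fit for population growth. The population figures are invented for illustration:

```python
import numpy as np

# Exponential fit y = a * exp(b * t), linearized as ln(y) = ln(a) + b * t,
# i.e. the log-linearization proposed for population-growth models.
t = np.array([0, 1, 2, 3, 4], float)                 # years
y = np.array([1000, 1230, 1515, 1862, 2290], float)  # made-up population

b, ln_a = np.polyfit(t, np.log(y), 1)    # degree-1 least squares in log space
a = np.exp(ln_a)
print(f"y ~ {a:.1f} * exp({b:.3f} t)")   # b ~ ln(1.23), i.e. ~23% yearly growth

# Lagrange form of the interpolating polynomial through the same points:
def lagrange(t_nodes, y_nodes, x):
    total = 0.0
    for i, (ti, yi) in enumerate(zip(t_nodes, y_nodes)):
        li = 1.0
        for j, tj in enumerate(t_nodes):
            if j != i:
                li *= (x - tj) / (ti - tj)   # factored basis polynomial l_i(x)
        total += yi * li
    return total

print(lagrange(t, y, 2.5))               # interpolated value between nodes
```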
465

An analytic approach to tensor scale with efficient computational solution and applications to medical imaging

Xu, Ziyue 01 May 2012 (has links)
Scale is a widely used notion in medical image analysis that evolved in the form of scale-space theory, where the key idea is to represent and analyze an image at various resolutions. Recently, a notion of local morphometric scale referred to as "tensor scale" was introduced using an ellipsoidal model that yields a unified representation of structure size, orientation and anisotropy. In previous work, tensor scale was described using a 2-D algorithmic approach, and a precise analytic definition was missing; moreover, 3-D applications were impractical under that framework due to computational complexity. The overall aim of the Ph.D. research is to establish an analytic definition of tensor scale in n-dimensional (n-D) images, to develop an efficient computational solution for 2- and 3-D images, and to investigate its role in various medical imaging applications including image interpolation, filtering, and segmentation. Firstly, an analytic definition of tensor scale has been formulated for n-D images consisting of objects formed by pseudo-Riemannian partitioning manifolds. Tensor scale captures contextual structural information, which is useful for locally structure-adaptive control of anisotropic parameters and for local structure description in object/image matching; it is therefore helpful in a wide range of medical imaging algorithms and applications. Secondly, an efficient computational solution of tensor scale for 2- and 3-D images has been developed. The algorithm combines the Euclidean distance transform with several novel differential-geometric approaches, and its accuracy has been verified on both geometric phantoms and real images against theoretical results generated by a brute-force method. A matrix representation has also been derived, facilitating several operations including tensor field smoothing to capture larger contextual knowledge. Thirdly, an inter-slice interpolation algorithm using the 2-D tensor scale information of adjacent slices has been developed to determine the interpolation line at each location of a gray-level image. Experimental results establish the superiority of tensor scale based interpolation over existing interpolation algorithms. Fourthly, an anisotropic diffusion filtering algorithm based on tensor scale has been developed. The method uses tensor scale to design the conductance function of the diffusion process, so that diffusion along structures is encouraged while boundary sharpness is preserved. Performance has been tested on phantoms and medical images at various noise levels, and the results, quantitatively compared with conventional gradient-based and structure tensor based algorithms, are quite encouraging. A tensor scale based n-linear interpolation method has also been developed, in which the weights of the neighbors are locally tuned based on local structure size and orientation. The method has been applied to several phantom and real images, and its performance evaluated against standard linear interpolation and windowed sinc interpolation; experimental results show that it generates more precise structure boundaries without causing ringing artifacts. Finally, a new anisotropic constrained region growing method, locally controlled by tensor scale, has been developed for vessel segmentation; it encourages axial region growing while arresting cross-structure leaking. The method has been successfully applied to several non-contrast pulmonary CT images, and its accuracy, evaluated using manual selection, is very promising.
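Tensor scale itself is beyond a short sketch, but the conventional gradient-based diffusion baseline the thesis compares against (Perona-Malik) fits in a few lines; the image, kappa and step size below are illustrative assumptions:

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=0.1, dt=0.2):
    """Conventional gradient-based anisotropic diffusion (Perona-Malik).

    This is the baseline method; the tensor-scale approach replaces the
    scalar conductance g below with one derived from local structure
    size, orientation and anisotropy.
    """
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)   # small across strong edges
    for _ in range(n_iter):
        # One-sided differences toward the four neighbours.
        dn = np.roll(u, -1, 0) - u
        ds = np.roll(u,  1, 0) - u
        de = np.roll(u, -1, 1) - u
        dw = np.roll(u,  1, 1) - u
        # Edge-stopping conductance suppresses smoothing at boundaries.
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

noisy = np.random.default_rng(0).normal(0, 0.05, (64, 64))
noisy[16:48, 16:48] += 1.0                    # bright square with sharp edges
print(perona_malik(noisy).std())              # noise reduced, edges retained
```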
466

Pick interpolation, displacement equations, and W*-correspondences

Norton, Rachael M. 01 May 2017 (has links)
The classical Nevanlinna-Pick interpolation theorem, proved in 1915 by Pick and in 1919 by Nevanlinna, gives a condition for when there exists an interpolating function in H∞(D) for a specified set of data in the complex plane. In 1967, Sarason proved his commutant lifting theorem for H∞(D), from which an operator theoretic proof of the classical Nevanlinna-Pick theorem followed. Several competing noncommutative generalizations arose as a consequence of Sarason's result, and two strategies emerged for proving generalized Nevanlinna-Pick theorems: via a commutant lifting theorem or via a resolvent, or displacement, equation. We explore the difference between these two approaches. Specifically, we compare two theorems: one by Constantinescu-Johnson from 2003 and one by Muhly-Solel from 2004. Muhly-Solel's theorem is stated in the highly general context of W*-correspondences and is proved via commutant lifting. Constantinescu-Johnson's theorem, while stated in a less general context, has the advantage of an elegant proof via a displacement equation. In order to make the comparison, we first generalize Constantinescu-Johnson's theorem to the setting of W*-correspondences in Theorem 3.0.1. Our proof, modeled after Constantinescu-Johnson's, hinges on a modified version of their displacement equation. Then we show that Theorem 3.0.1 is fundamentally different from Muhly-Solel's. More specifically, interpolation in the sense of Muhly-Solel's theorem implies interpolation in the sense of Theorem 3.0.1, but the converse is not true. Nevertheless, we identify a commutativity assumption under which the two theorems yield the same result. In addition to the two main theorems, we include smaller results that clarify the connections between the notation, space of interpolating maps, and point evaluation employed by Constantinescu-Johnson and those employed by Muhly-Solel. We conclude with an investigation of the relationship between Theorem 3.0.1 and Popescu's generalized Nevanlinna-Pick theorem proved in 2003.
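As background, the condition in the classical Nevanlinna-Pick theorem can be checked numerically: an interpolant in H∞(D) of norm at most one exists if and only if the Pick matrix of the data is positive semidefinite. A sketch with made-up data points:

```python
import numpy as np

def pick_matrix(z, w):
    """Pick matrix P[i, j] = (1 - w_i conj(w_j)) / (1 - z_i conj(z_j))
    for interpolation nodes z and target values w in the unit disc."""
    z, w = np.asarray(z), np.asarray(w)
    num = 1 - np.outer(w, w.conj())
    den = 1 - np.outer(z, z.conj())
    return num / den

# f in H-infinity of the disc with ||f|| <= 1 and f(z_i) = w_i exists
# iff the (Hermitian) Pick matrix is positive semidefinite.
z = np.array([0.0, 0.5j, -0.3])
w = np.array([0.1, 0.2 + 0.1j, -0.05])
eigs = np.linalg.eigvalsh(pick_matrix(z, w))
print("interpolant exists:", bool(eigs.min() >= -1e-12))
```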
467

Quelques contributions vers la simulation parallèle de la cinétique neutronique et la prise en compte de données observées en temps réel / Some contributions towards the parallel simulation of time dependent neutron transport and the integration of observed data in real time

Mula Hernandez, Olga 24 September 2014 (has links)
In this thesis we first developed a time-dependent 3D neutron transport solver on unstructured meshes with a discontinuous Galerkin finite element spatial discretization (the MINARET solver). The solver is in itself an important contribution to reactor physics thanks to the accuracy it can provide about the state of the core during severe accidents; it will also play an important role in vessel fluence calculations. From a mathematical point of view, the most important contribution is the implementation of algorithms well adapted to modern parallel architectures, which significantly decrease computing times. A particular effort was made to parallelize the time variable efficiently using the parareal in time algorithm. In a second stage, we developed the foundations of a method with which MINARET could be used to monitor the neutron population in real time during the operation of a reactor. One of the major difficulties lies in the necessity of providing computations in real time. This question has been addressed by proposing an extension of the Empirical Interpolation Method (EIM), through which a well-posed interpolation procedure is defined for functions belonging to Banach spaces. This is possible thanks to the use of interpolating linear forms instead of the traditional interpolation points, and part of this thesis is devoted to the theoretical properties of this method (convergence analysis under the hypothesis of small Kolmogorov n-width, and stability of the procedure).
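A sketch of the parareal in time algorithm on a scalar model ODE. The propagators here are simple Euler steps and the equation is a stand-in; MINARET's transport operators are far richer, and the fine solves would run in parallel:

```python
import numpy as np

def parareal(f, u0, t0, t1, n_slices, k_iter, fine_steps=100):
    """Parareal iteration: U_{n+1}^{k+1} = G(U_n^{k+1}) + F(U_n^k) - G(U_n^k).

    G: one coarse Euler step per time slice; F: many fine Euler steps.
    In a real solver the F evaluations run in parallel across slices.
    """
    ts = np.linspace(t0, t1, n_slices + 1)
    dt = ts[1] - ts[0]

    def G(u, t):                        # coarse propagator: 1 Euler step
        return u + dt * f(t, u)

    def F(u, t):                        # fine propagator: many Euler steps
        h = dt / fine_steps
        for i in range(fine_steps):
            u = u + h * f(t + i * h, u)
        return u

    U = [u0]                            # initial guess by a coarse sweep
    for n in range(n_slices):
        U.append(G(U[-1], ts[n]))
    for k in range(k_iter):             # parareal corrections
        Fu = [F(U[n], ts[n]) for n in range(n_slices)]   # parallelizable
        Unew = [u0]
        for n in range(n_slices):
            Unew.append(G(Unew[-1], ts[n]) + Fu[n] - G(U[n], ts[n]))
        U = Unew
    return np.array(U)

# Model problem: u' = -u, a crude stand-in for a decaying neutron population.
print(parareal(lambda t, u: -u, 1.0, 0.0, 1.0, n_slices=10, k_iter=3)[-1])
# Approaches the fine solution, close to exp(-1) ~ 0.3679, as k grows.
```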
468

Efficient and robust partitioned solution schemes for fluid-structure interactions

Bogaers, Alfred Edward Jules January 2015 (has links)
Includes bibliographical references / In this thesis, the development of a strongly coupled, partitioned fluid-structure interactions (FSI) solver is outlined. Well established methods are analysed and new methods are proposed to provide robust, accurate and efficient FSI solutions. All the methods introduced and analysed are primarily geared towards the solution of incompressible, transient FSI problems, which facilitate the use of black-box sub-domain field solvers. In the first part of the thesis, radial basis function (RBF) interpolation is introduced for interface information transfer. RBF interpolation requires no grid connectivity information, and therefore presents an elegant means by which to transfer information across a non-matching and non-conforming interface to couple finite element to finite volume based discretisation schemes. The transfer scheme is analysed, with particular emphasis on a comparison between consistent and conservative formulations. The primary aim is to demonstrate that the widely used conservative formulation is a zero order method. Furthermore, while the consistent formulation is not provably conservative, it yields errors well within acceptable levels and converges within the limit of mesh refinement. A newly developed multi-vector update quasi-Newton (MVQN) method for implicit coupling of black-box partitioned solvers is proposed. The new coupling scheme, under certain conditions, can be demonstrated to provide near Newton-like convergence behaviour. The superior convergence properties and robust nature of the MVQN method are shown in comparison to other well-known quasi-Newton coupling schemes, including the least squares reduced order modelling (IBQN-LS) scheme, the classical rank-1 update Broyden's method, and fixed point iterations with dynamic relaxation. Partitioned, incompressible FSI, based on Dirichlet-Neumann domain decomposition solution schemes, cannot be applied to problems where the fluid domain is fully enclosed. A simple example often provided in the literature is that of balloon inflation with a prescribed inflow velocity. In this context, artificial compressibility (AC) will be shown to be a useful method to relax the incompressibility constraint, by including a source term within the fluid continuity equation. The attractiveness of AC stems from the fact that this source term can readily be added to almost any fluid field solver, including most commercial solvers. AC/FSI is however limited in the range of problems it can effectively be applied to. To this end, the combination of the newly developed MVQN method with AC/FSI is proposed. In so doing, the AC modified fluid field solver can continue to be treated as a black-box solver, while the overall robustness and performance are significantly improved. The study concludes with a demonstration of the modularity offered by partitioned FSI solvers. The analysis of the coupled environment is extended to include steady state FSI, FSI with free surfaces and an FSI problem with solid-body contact.
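A minimal sketch of the consistent RBF transfer across a non-matching 1-D interface, assuming Gaussian basis functions augmented with a linear polynomial term; the node counts and traction field are invented:

```python
import numpy as np

def rbf_transfer(x_src, u_src, x_dst, eps=1.0):
    """Consistent RBF transfer of interface data between non-matching
    1-D interface discretisations: interpolate on the source nodes,
    then evaluate at the destination nodes."""
    phi = lambda r: np.exp(-(eps * r) ** 2)   # Gaussian basis function
    n = len(x_src)
    # Interpolation system [Phi P; P^T 0] [w; c] = [u; 0], where the
    # linear polynomial term P makes linear fields reproduced exactly.
    Phi = phi(np.abs(x_src[:, None] - x_src[None, :]))
    P = np.column_stack([np.ones(n), x_src])
    A = np.block([[Phi, P], [P.T, np.zeros((2, 2))]])
    coeff = np.linalg.solve(A, np.concatenate([u_src, np.zeros(2)]))
    w, c = coeff[:n], coeff[n:]
    # Evaluate at the (non-matching) destination nodes.
    B = phi(np.abs(x_dst[:, None] - x_src[None, :]))
    return B @ w + c[0] + c[1] * x_dst

x_fluid = np.linspace(0, 1, 11)     # e.g. finite-volume interface nodes
x_solid = np.linspace(0, 1, 7)      # non-matching finite-element nodes
traction = np.sin(np.pi * x_fluid)  # illustrative interface field
print(rbf_transfer(x_fluid, traction, x_solid).round(3))
# A conservative variant would instead apply the transpose of the
# evaluation operator to nodal loads, which is the zero-order scheme
# the thesis argues against.
```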
469

ACCURATE APPROXIMATION OF UNSTRUCTURED GRID INTO REGULAR GRID WITH COMPLEX BOUNDARY HANDLING

Dana M El Rushaidat (11777354) 03 December 2021 (has links)
Computational Fluid Dynamic (CFD) simulations often produce datasets defined over unstructured grids with solid boundaries. Though unstructured grids allow for the flexible representation of this geometry and the refinement of the grid resolution, they suffer from high storage cost, non-trivial spatial queries, and low reconstruction smoothness. Rectilinear grids, in contrast, have a negligible memory footprint and readily support smooth data reconstruction, though with reduced geometric flexibility.

This thesis is concerned with the creation of accurate reconstructions of large unstructured datasets on rectilinear grids capable of representing complex boundaries. We present an efficient method to automatically determine the geometry of a rectilinear grid upon which a low-error data reconstruction can be achieved with a given reconstruction kernel. Using this rectilinear grid, we address the potential ill-posedness of the data fitting problem, as well as the necessary balance between smoothness and accuracy, through a bi-level smoothness regularization. To tackle the computational challenge posed by very large input datasets and high-resolution reconstructions, we propose a block-based approach that allows us to obtain a seamless global approximation from a set of independently computed sparse least-squares problems.

We endow the approximated rectilinear grid with boundary handling capabilities that accommodate challenging boundaries while supporting high-order reconstruction kernels. Results are presented for several 3D datasets that demonstrate the quality of the visualizations our reconstruction enables, at a greatly reduced computational and memory cost. Our data representation enjoys all the benefits of conventional rectilinear grids while addressing their fundamental geometric limitations.
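A 1-D sketch of the regularized least-squares fit described above, assuming a linear reconstruction kernel and a simple second-difference smoothness penalty; the thesis's bi-level regularization, boundary handling and block decomposition are not reproduced here:

```python
import numpy as np
from scipy.sparse import lil_matrix, vstack
from scipy.sparse.linalg import lsqr

def fit_to_grid(x, f, nodes, lam=1e-2):
    """Least-squares fit of scattered samples (x, f) onto a 1-D regular grid
    with a linear (hat function) reconstruction kernel, regularized by a
    second-difference smoothness penalty so the problem stays well posed
    where cells contain few or no samples."""
    h = nodes[1] - nodes[0]
    A = lil_matrix((len(x), len(nodes)))
    for r, xi in enumerate(x):
        i = min(int((xi - nodes[0]) / h), len(nodes) - 2)
        t = (xi - nodes[i]) / h             # local coordinate in the cell
        A[r, i], A[r, i + 1] = 1 - t, t     # linear kernel weights
    n = len(nodes)
    D = lil_matrix((n - 2, n))              # second-difference operator
    for i in range(n - 2):
        D[i, i:i + 3] = [1, -2, 1]
    M = vstack([A.tocsr(), lam * D.tocsr()])
    rhs = np.concatenate([f, np.zeros(n - 2)])
    return lsqr(M, rhs)[0]                  # sparse least-squares solve

rng = np.random.default_rng(0)
xs = rng.uniform(0, 1, 200)                 # scattered "unstructured" samples
fs = np.sin(2 * np.pi * xs) + rng.normal(0, 0.05, 200)
grid = np.linspace(0, 1, 33)
print(fit_to_grid(xs, fs, grid)[:5].round(3))
```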
470

Podobnost obrazu na základě barvy / Image similarity based on colour

Hampl, Filip January 2015 (has links)
This diploma thesis deals with image similarity based on colour. It first covers the theoretical background needed to understand the topic: the colour models implemented in the work, the principle of building a histogram, and how histograms are compared. The next chapter summarises recent progress in the field of image comparison and surveys several of the most widely used methods. The practical part introduces the training image database on which the success rate of each implemented method is measured; the methods are described individually, including their principles and the results achieved. The thesis closes with a description of the user interface, which presents the results of the chosen method in a clear way.
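A minimal sketch of the histogram pipeline such a thesis builds on: compute a normalized RGB histogram per image, then compare histograms. Correlation and intersection are two of the common measures; the bin count and images below are arbitrary:

```python
import numpy as np

def colour_histogram(img, bins=8):
    """Normalized 3-D RGB histogram of an image (H x W x 3, values 0-255)."""
    hist, _ = np.histogramdd(img.reshape(-1, 3), bins=(bins,) * 3,
                             range=((0, 256),) * 3)
    return hist.ravel() / hist.sum()

def compare(h1, h2):
    """Two common histogram similarity measures."""
    corr = np.corrcoef(h1, h2)[0, 1]     # correlation: 1 means identical
    inter = np.minimum(h1, h2).sum()     # intersection: 1 means identical
    return corr, inter

rng = np.random.default_rng(0)
img_a = rng.integers(0, 256, (64, 64, 3))                       # random image
img_b = np.clip(img_a + rng.integers(-10, 10, img_a.shape), 0, 255)  # perturbed
print(compare(colour_histogram(img_a), colour_histogram(img_b)))
```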
