211

Sezónní stavové modelování / Seasonal state space modeling

Suk, Luboš January 2014 (has links)
State space modeling provides a statistical framework for exponential smoothing methods and is often used in time series modeling. This thesis describes seasonal innovations state space models and focuses on the recently proposed TBATS model. The model combines a Box-Cox transformation, an ARMA model for the residuals and a trigonometric representation of seasonality, and it was designed to handle a broad spectrum of time series with complex types of seasonality, including multiple seasonality, high-frequency data, non-integer periods of seasonal components, and dual-calendar effects. Maximum likelihood estimation of the parameters together with the trigonometric representation of seasonality greatly reduces the computational burden of the model. The versatility of the TBATS model is demonstrated on four real time series.
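As a rough illustration of the trigonometric representation of seasonality mentioned in this abstract (a generic sketch, not the thesis's own code), the following Python snippet builds Fourier seasonal regressors that can handle non-integer and multiple seasonal periods; the function name and harmonic counts are illustrative assumptions.

```python
import numpy as np

def trigonometric_seasonality(t, period, n_harmonics):
    """Fourier (trigonometric) seasonal terms for a possibly non-integer period.

    Returns an array of shape (len(t), 2 * n_harmonics) whose columns are
    sin/cos pairs; a seasonal component is then modeled as a linear
    combination of these columns, which is how a trigonometric formulation
    keeps the number of seasonal parameters small even for long or
    non-integer periods.
    """
    t = np.asarray(t, dtype=float)
    cols = []
    for j in range(1, n_harmonics + 1):
        cols.append(np.sin(2.0 * np.pi * j * t / period))
        cols.append(np.cos(2.0 * np.pi * j * t / period))
    return np.column_stack(cols)

# Example: daily data with a weekly pattern (period 7) and a non-integer
# annual period of 365.25 days, as in multiple-seasonality settings.
t = np.arange(730)
X = np.hstack([
    trigonometric_seasonality(t, period=7, n_harmonics=3),
    trigonometric_seasonality(t, period=365.25, n_harmonics=5),
])
print(X.shape)  # (730, 16)
```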
212

Approximating true relevance model in relevance feedback

Zhang, Peng January 2013 (has links)
Relevance is an essential concept in information retrieval (IR) and relevance estimation is a fundamental IR task. It involves not only document relevance estimation, but also estimation of the user's information need. The relevance-based language model aims to estimate a relevance model (i.e., a relevant query term distribution) from relevance feedback documents. The true relevance model should be generated from truly relevant documents. The ideal estimation of the true relevance model is expected to be not only effective in terms of mean retrieval performance (e.g., Mean Average Precision) over all queries, but also stable in the sense that the performance is consistent across different individual queries. In practice, however, when approximating/estimating the true relevance model, improving retrieval effectiveness often sacrifices retrieval stability, and vice versa. In this thesis, we propose to explore and analyze this effectiveness-stability tradeoff from a new perspective, namely the bias-variance tradeoff, a fundamental concept in statistical estimation. We first formulate the bias, the variance and the trade-off between them for retrieval performance as well as for query model estimation. We then analytically and empirically study a number of factors (e.g., query model complexity, query model combination, document weight smoothness and irrelevant-document removal) that can affect the bias and variance. Our study shows that the proposed bias-variance trade-off analysis can serve as an analytical framework for query model estimation. We then investigate in depth two key factors in query model estimation, document weight smoothness and removal of irrelevant documents, by proposing novel methods for document weight smoothing and irrelevance distribution separation, respectively. Systematic experimental evaluation on TREC collections shows that the proposed methods can improve both the retrieval effectiveness and the retrieval stability of query model estimation. In addition to the above main contributions, we also carry out an initial exploration of two further directions: the formulation of bias and variance in personalization, and a view of query model estimation from a novel theoretical angle (quantum theory) that has partially inspired our research.
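For context on what a relevance model is in this setting, here is a sketch of the standard RM1-style estimator from relevance feedback documents; it is not necessarily the exact formulation studied in the thesis, and the smoothing parameter is an assumption.

```python
from collections import Counter

def relevance_model(query, feedback_docs, mu=1000.0):
    """Estimate a relevance model P(w|R) from feedback documents (RM1-style).

    Each document is a list of terms.  Document language models are
    Dirichlet-smoothed against the feedback-set statistics, the query
    likelihood P(Q|D) weights each document, and the weighted mixture of
    document models gives the relevant query-term distribution.
    """
    coll = Counter()
    for d in feedback_docs:
        coll.update(d)
    coll_len = sum(coll.values())

    def p_w_given_d(w, tf, dlen):
        return (tf.get(w, 0) + mu * coll[w] / coll_len) / (dlen + mu)

    pwr = Counter()
    for d in feedback_docs:
        tf, dlen = Counter(d), len(d)
        q_lik = 1.0
        for q in query:
            q_lik *= p_w_given_d(q, tf, dlen)   # query likelihood P(Q|D)
        for w in set(d):
            pwr[w] += q_lik * p_w_given_d(w, tf, dlen)
    z = sum(pwr.values()) or 1.0
    return {w: p / z for w, p in pwr.items()}

docs = [["smoothing", "query", "model"], ["query", "term", "distribution"]]
print(relevance_model(["query", "model"], docs))
```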
213

Implementation of Separable & Steerable Gaussian Smoothers on an FPGA

Joginipelly, Arjun 17 December 2010 (has links)
Smoothing filters have been extensively used for noise removal and image restoration. Directional filters are widely used in computer vision and image processing tasks such as motion analysis, edge detection, line parameter estimation and texture analysis. It is practically impossible to tune the filters to all possible positions and orientations in real time due to the huge computational requirement. The efficient way is to design a few basis filters and express the output of a directional filter as a weighted sum of the basis filter outputs. Directional filters having these properties are called "Steerable Filters." This thesis focuses on the implementation of the proposed computationally efficient separable and steerable Gaussian smoothers on a Xilinx Virtex-II Pro FPGA platform. FPGAs (Field Programmable Gate Arrays) consist of a collection of logic blocks, including lookup tables, flip-flops and some amount of random access memory, all wired together using an array of interconnects. The proposed technique [2] is implemented on FPGA hardware, taking advantage of parallelism and pipelining.
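As a software reference for the steering idea described above, here is a minimal sketch of the classic separable, steerable first-derivative-of-Gaussian pair; it illustrates the general property, not the specific smoothers of [2] or their FPGA datapath.

```python
import numpy as np
from scipy import ndimage

def gaussian_1d(sigma):
    """1-D Gaussian kernel and its derivative."""
    r = max(1, int(3 * sigma))
    x = np.arange(-r, r + 1, dtype=float)
    g = np.exp(-x**2 / (2 * sigma**2))
    g /= g.sum()
    dg = -x / sigma**2 * g
    return g, dg

def steered_gaussian_derivative(image, sigma, theta):
    """Derivative-of-Gaussian response at an arbitrary orientation theta.

    The two basis filters G_x and G_y are separable (built from 1-D row and
    column convolutions), and the oriented output is their weighted sum
    cos(theta) * G_x + sin(theta) * G_y, the steerability property that
    avoids re-filtering the image for every orientation.
    """
    g, dg = gaussian_1d(sigma)
    gx = ndimage.convolve1d(ndimage.convolve1d(image, dg, axis=1), g, axis=0)
    gy = ndimage.convolve1d(ndimage.convolve1d(image, g, axis=1), dg, axis=0)
    return np.cos(theta) * gx + np.sin(theta) * gy

img = np.random.rand(64, 64)
resp = steered_gaussian_derivative(img, sigma=2.0, theta=np.pi / 4)
```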
214

Bifurcações induzidas por limites no contexto de estabilidade de tensão / Limit induced bifurcations in the context of voltage stability

Abrantes, Adriano Lima 09 August 2016 (has links)
The interconnection of electric power systems (EPSs) has led to an increase in security assessment complexity. Besides, due to economic and environmental influences, EPSs have been operating closer to their transmission limits, which raises the relevance of security assessment in the context of voltage stability. In this scenario, problems related to EPS power transmission capacity, such as the limit induced bifurcation (LIB), become more important and bring the need for appropriate analysis tools. One of the goals of this project is to study the LIB problem more deeply, so that it can be better understood in the context of voltage stability. Another objective is the development of methods for evaluating the load margin (LM) considering the possible occurrence of LIBs. Finally, since LM sensitivity analysis with respect to the saddle-node bifurcation (SNB) plays a highly important role in voltage stability studies, developing a method for LM sensitivity analysis due to LIB is our third objective. The sensitivity analysis is important not only because it provides information on the instability phenomenon and its mechanisms, but also because it is useful for EPS security assessment, since it may indicate which control actions will be more effective in increasing the LM and which contingencies may be more severe. However, this analysis has previously been performed only for the case in which the LM is determined by an SNB point, not for the LIB case. With the intention of enabling pre-existing preventive control selection tools to treat the LIB phenomenon, a sensitivity analysis was performed at the LIB point, similar to the one developed for the SNB. Another contribution of this work is a smoothing formulation for complementarity limits, applied to the problem of the limited reactive power injection of generating units. The proposed formulation transforms, at least numerically, the LIB into an SNB, which can then be detected through methods already established in the literature.
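The abstract does not give the specific smooth formulation used; as a generic illustration of how a complementarity limit (such as a generator reactive power limit) can be smoothed into a single differentiable equation, here is a sketch based on the smoothed Fischer-Burmeister function. This choice of function is an assumption for illustration, not necessarily the author's formulation.

```python
import numpy as np

def fischer_burmeister(a, b, eps=1e-6):
    """Smoothed Fischer-Burmeister function.

    phi(a, b) = a + b - sqrt(a^2 + b^2 + 2*eps) is (approximately) zero
    exactly when a >= 0, b >= 0 and a*b ~ 0, so the single smooth equation
    phi = 0 can stand in for a complementarity condition inside a power
    flow Jacobian while remaining differentiable at the switching point.
    """
    return a + b - np.sqrt(a * a + b * b + 2.0 * eps)

def pv_pq_residual(q, q_max, v, v_set, eps=1e-6):
    """Illustrative generator limit: either the voltage stays at its setpoint
    (v_set - v = 0) or the unit is at its reactive limit (q_max - q = 0)."""
    return fischer_burmeister(q_max - q, v_set - v, eps)

print(pv_pq_residual(q=0.8, q_max=1.0, v=1.0, v_set=1.0))   # inside limits: ~0
print(pv_pq_residual(q=1.0, q_max=1.0, v=0.97, v_set=1.0))  # at the limit: ~0
```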
215

Correção de normais para suavização de nuvens de pontos / Normal correction towards smoothing point-based surfaces

Valdivia, Paola Tatiana Llerena 08 November 2013 (has links)
In the last years, surface denoising has been a subject of intensive research in geometry processing. Most recent approaches for mesh denoising use a two-step scheme: normal filtering followed by a point updating step to match the corrected normals. In this work, we propose an adaptation of such two-step approaches to point-based surfaces, exploring three different weight schemes for filtering normals. Moreover, we also investigate three techniques for normal estimation, analyzing the impact of each normal estimation method on the whole point-set smoothing process. Towards a quantitative analysis, in addition to conventional visual comparison, we evaluate the effectiveness of different implementation choices using two measures, comparing our results against state-of-the-art point-based denoising techniques. Keywords: surface smoothing; point-based surface; normal estimation; normal filtering.
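A rough sketch of the general two-step idea (normal filtering, then point updates to match the filtered normals); the neighborhood size, weights and step sizes here are illustrative and do not correspond to the three weight schemes evaluated in the thesis.

```python
import numpy as np
from scipy.spatial import cKDTree

def smooth_point_cloud(points, normals, k=12, sigma_n=0.3,
                       iters_n=3, iters_p=10, lam=0.3):
    """Two-step point-set smoothing: filter normals, then update points.

    Step 1 averages each normal with its k nearest neighbors, weighted by
    normal similarity (a bilateral-style weight); step 2 moves each point
    along its filtered normal so that its neighborhood agrees with it.
    """
    points = np.asarray(points, dtype=float)
    normals = np.asarray(normals, dtype=float)
    tree = cKDTree(points)
    _, nbr = tree.query(points, k=k + 1)
    nbr = nbr[:, 1:]                                    # drop the point itself

    n = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    for _ in range(iters_n):                            # step 1: normal filtering
        w = np.exp((np.einsum('ij,ikj->ik', n, n[nbr]) - 1.0) / sigma_n ** 2)
        n = n + np.einsum('ik,ikj->ij', w, n[nbr])
        n /= np.linalg.norm(n, axis=1, keepdims=True)

    p = points.copy()
    for _ in range(iters_p):                            # step 2: point update
        d = p[nbr] - p[:, None, :]                      # vectors to neighbors
        proj = np.einsum('ikj,ij->ik', d, n)            # offsets along the normal
        p = p + lam * proj.mean(axis=1)[:, None] * n
    return p, n
```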
216

A suavização do lucro líquido e a persistência das contas de resultado das empresas brasileiras de capital aberto / The net income smoothing and the persistence of the result accounts of Brazilian companies

Kajimoto, Clarice Gutierrez Kitamura 21 March 2017 (has links)
The literature treats income smoothing as one of the proxies for earnings quality (DECHOW; GE; SCHRAND, 2010). However, research on income smoothing diverges on whether smoothing increases or decreases earnings quality. Some studies test whether greater income smoothing increases the quality of information through earnings persistence (TUCKER; ZAROWIN, 2006). It is known, however, that investors do not project firms' future cash flows using only net income, but also the profit and loss accounts that make up this income, since these are considered relevant in investment decisions (BARTON; HANSEN; POWNALL, 2010). The impact of income smoothing on the accounts that make up net income, however, is not known. Thus, this research analyzes how the objective of smoothing net income affects the persistence of the profit and loss accounts that compose it. The sample firms were separated into those that smooth net income more and those that smooth it less, according to three smoothing measures found in the literature (LEUZ; NANDA; WYSOCKI, 2003; TUCKER; ZAROWIN, 2006). 
Subsequently, the persistence of the profit and loss accounts was tested using the persistence model adapted from Dechow, Ge and Schrand (2010). The results show that the companies that smooth net income the most have more persistent profit and loss accounts than the companies that smooth it the least. In addition, companies that smooth net income the most with a larger amount of discretionary accruals have certain profit and loss accounts that are more persistent than those of companies that smooth with a smaller amount of discretionary accruals. Therefore, the results suggest that managers may be smoothing income artificially, increasing the persistence of certain profit and loss accounts and making that persistence artificial. Thus, an investor who projects future cash flows of companies that smooth net income the most with a larger amount of discretionary accruals may have the decision impaired, since the projected cash flows may not represent the company's expected future financial performance.
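For readers unfamiliar with the measures involved, here is a minimal sketch of one common income-smoothing proxy (Leuz, Nanda and Wysocki style: the volatility of operating income relative to the volatility of operating cash flow, both scaled by lagged assets) and of a pooled persistence regression. The column names and the pooled OLS setup are illustrative assumptions, not the thesis's exact models.

```python
import numpy as np
import pandas as pd

def smoothing_measure(df):
    """Per-firm smoothing proxy: std(operating income) / std(operating cash flow),
    both scaled by lagged total assets.  A LOWER ratio indicates MORE smoothing.
    Expected columns (assumed): firm, op_income, op_cash_flow, assets_lag.
    """
    scaled = df.assign(oi=df.op_income / df.assets_lag,
                       cfo=df.op_cash_flow / df.assets_lag)
    g = scaled.groupby('firm')
    return g['oi'].std() / g['cfo'].std()

def persistence(df, account='earnings'):
    """Slope of account_{t+1} on account_t (pooled OLS), a persistence proxy."""
    d = df.sort_values(['firm', 'year']).copy()
    d['lead'] = d.groupby('firm')[account].shift(-1)
    d = d.dropna(subset=['lead', account])
    slope, _intercept = np.polyfit(d[account].to_numpy(), d['lead'].to_numpy(), 1)
    return slope
```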
217

Essays on uncertainty, asset prices and monetary policy : a case of Korea

Yi, Paul January 2014 (has links)
In Korea, an inflation targeting (IT) regime was adopted in the aftermath of the Korean currency crisis of 1997–1998. At that time, the Bank of Korea (BOK) shifted the instrument of monetary policy from monetary aggregates to interest rates. Recently, central bank policymakers have confronted more uncertainties than ever before when deciding their policy interest rates. In this monetary policy environment, it is worth exploring whether the BOK has kept a conservative posture in moving the Korean call rate target (the equivalent of the US Federal Funds rate target) since the implementation of an interest rate-oriented monetary policy. Together with this, the global financial crisis (GFC) of 2007–2009, provoked by the US sub-prime mortgage market, raises the following question: should central banks pre-emptively react to a sharp increase in asset prices? Historical episodes indicate that boom-bust cycles in asset prices, in particular house prices, can be damaging to the economy. In Korea, house prices have been evolving under uncertainties, and in the process house-price bubbles have formed. Therefore, in recent years, central bankers and academics in Korea have paid great attention to fluctuations in asset prices. In this context, the aims of this thesis are: (i) to set up theoretical and empirical models of monetary policy under uncertainty; (ii) to examine the effect of uncertainty on the operation of monetary policy since the adoption of interest rate-oriented policy; (iii) to investigate whether gradual adjustment in policy rates can be explained by uncertainty in Korea; and (iv) to examine whether house-price fluctuations should be taken into account in formulating monetary policy. The main findings of this thesis are summarised as follows. Firstly, as in advanced countries, four stylised facts regarding the policy interest rate path are found in Korea: infrequent changes in policy rates; successive changes in the same direction; asymmetric adjustments in terms of the size of interest-rate changes for continuation and reversal periods; and a long pause before reversals in policy rates. These patterns of policy rates (i.e., interest-rate smoothing) characterise the central bank's reaction to inflation and the output gap as less aggressive than optimising central bank behaviour would predict (Chapter 3). Secondly, uncertainty may provide a rationale for a smoother path of the policy interest rate in Korea. In particular, since the introduction of the interest rate-oriented monetary policy, the actual call money rates have been shown to be similar to the optimal rate path under parameter uncertainty. Gradual movements in the policy rates do not necessarily indicate that the central bank has an interest-rate smoothing incentive. Uncertainty about the dynamic structure of the economy, dubbed 'parameter uncertainty', could account for a considerable portion of the observed gradual movements in policy interest rates (Chapter 4). Thirdly, it is found that the greater the output-gap uncertainty, the smaller the output-gap response coefficients in the optimal policy rules; in a similar vein, the greater the inflation uncertainty, the smaller the inflation response coefficients. The optimal policy rules derived using data without errors showed large output-gap and inflation response coefficients. 
This finding confirms that data uncertainty can be one of the sources explaining why monetary policymakers react less aggressively in setting their interest rate instrument (Chapter 5). Finally, we found that house prices conveyed some useful information on conditions such as possible financial instability and future inflation in Korea, and that the house-price shock differed from other shocks to the macroeconomy in that it had persistent impacts on the economy, consequently provoking much larger economic volatility. Empirical simulations showed that the central bank could reduce its loss values in terms of economic volatility, thereby promoting overall economic stability, when it responds more directly to fluctuations in house prices. This finding provides a reason why the central bank should give more attention to house-price fluctuations when conducting monetary policy (Chapter 6).
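As a concrete illustration of interest-rate smoothing, here is a generic partial-adjustment Taylor rule sketch; the coefficient values are illustrative assumptions, not the rule estimated in the thesis.

```python
import numpy as np

def smoothed_policy_rate(inflation, output_gap, rho=0.8, r_star=2.0,
                         pi_star=2.0, phi_pi=0.5, phi_y=0.5, i0=3.0):
    """Policy rate under a partial-adjustment (interest-rate smoothing) rule.

    Target rate:  i*_t = r* + pi_t + phi_pi * (pi_t - pi*) + phi_y * y_t
    Actual rate:  i_t  = rho * i_{t-1} + (1 - rho) * i*_t
    A smoothing parameter rho close to 1 reproduces the gradual,
    same-direction adjustments described in the abstract.
    """
    i_prev, path = i0, []
    for pi, y in zip(inflation, output_gap):
        i_target = r_star + pi + phi_pi * (pi - pi_star) + phi_y * y
        i_prev = rho * i_prev + (1.0 - rho) * i_target
        path.append(i_prev)
    return np.array(path)

# A temporary inflation spike produces a drawn-out sequence of small rate
# moves rather than one large jump.
print(np.round(smoothed_policy_rate(inflation=[2, 4, 4, 3, 2, 2, 2, 2],
                                    output_gap=[0, 1, 1, 0, 0, 0, 0, 0]), 2))
```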
218

Econometric Modeling vs Artificial Neural Networks : A Sales Forecasting Comparison

Bajracharya, Dinesh January 2011 (has links)
Econometric and predictive modeling techniques are two popular forecasting approaches. Both of these techniques have their own advantages and disadvantages. In this thesis some econometric models are considered and compared to predictive models using sales data for five products from ICA, a Swedish retail wholesaler. The econometric models considered are a regression model, exponential smoothing, and an ARIMA model. The predictive models considered are an artificial neural network (ANN) and an ensemble of neural networks. Evaluation metrics used for the comparison are: MAPE, WMAPE, MAE, RMSE, and linear correlation. The results of this thesis show that the artificial neural network is more accurate in forecasting product sales, but it does not differ much from linear regression in terms of accuracy. Therefore the linear regression model, which has the advantage of being comprehensible, can be used as an alternative to the artificial neural network. The results also show that the use of several metrics contributes to evaluating models for forecasting sales. / Program: Magisterutbildning i informatik
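For reference, here is one common definition of the evaluation metrics listed in the abstract; the exact weighting used for WMAPE in the thesis may differ.

```python
import numpy as np

def forecast_metrics(actual, forecast):
    """Forecast-error metrics: MAPE, WMAPE, MAE, RMSE and linear correlation."""
    a = np.asarray(actual, dtype=float)
    f = np.asarray(forecast, dtype=float)
    err = a - f
    return {
        'MAPE': float(np.mean(np.abs(err / a)) * 100.0),        # mean absolute % error
        'WMAPE': float(np.sum(np.abs(err)) / np.sum(np.abs(a)) * 100.0),
        'MAE': float(np.mean(np.abs(err))),
        'RMSE': float(np.sqrt(np.mean(err ** 2))),
        'corr': float(np.corrcoef(a, f)[0, 1]),                  # Pearson correlation
    }

print(forecast_metrics(actual=[100, 120, 90, 110], forecast=[105, 115, 95, 100]))
```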
219

Simulation of continuous damage and fracture in metal-forming processes with 3D mesh adaptive methodology / Simulation numérique d’endommagement continu et de fissure dans les procédés de mise en forme de métal avec 3D maillage méthodologie adaptatif

Yang, Fangtao 10 November 2017 (has links)
This work is part of the research carried out in a collaboration between the Roberval laboratory of the Université de Technologie de Compiègne and the LASMIS team of the Institut Charles Delaunay, Université de Technologie de Troyes, within the framework of project ANR-14-CE07-0035. We present a three-dimensional adaptive finite element h-methodology to represent the initiation and propagation of cracks in ductile materials. An elastoplastic model coupled with isotropic damage, proposed by the LASMIS/UTT team, is used. The targeted applications mainly concern metal forming. In this context, an updated Lagrangian formulation is used, and frequent remeshing is essential both to avoid the strong element distortion caused by large plastic deformations and to follow the changes in topology resulting from the creation of cracks. The size of the new mesh must make it possible, at low cost, to accurately represent the evolution of the gradients of the physical quantities representative of the phenomena studied (plasticity, damage, ...). We propose empirical element-size indicators based on the plastic strain as well as on the damage. A piecewise-defined curve describes the evolution of the element size according to the severity of the plasticity and, where appropriate, of the damage. Cracks are represented by an element-deletion method, which allows an easy description of the crack geometry and a simplified treatment of cracking without any need for additional criteria. On the other hand, to allow a realistic description of the cracks, the latter must be represented by eroding the smallest elements. An ABAQUS/Explicit® solver is used with quadratic tetrahedral elements (C3D10M), avoiding in particular the numerical locking problems that occur when analysing structures made of compressible or quasi-incompressible material. Controlling the smallest mesh size is important in an explicit context. In addition, for softening phenomena, the solution depends on the mesh size, which then acts as an intrinsic parameter. A study has shown that when the mesh is sufficiently refined, the effects of mesh dependence are reduced. In the literature, the cost of frequent meshing or remeshing is often considered prohibitive, and many authors rely on this argument to introduce, successfully, alternative methods that limit the cost of remeshing operations without eliminating them (XFEM, for example). Our work shows that the cost of local remeshing is negligible compared to that of the computation. Given the complexity of the geometry and the need to refine the mesh, the only practical alternative to date is to use a tetrahedral mesher. The local tetrahedral remeshing strategy is based on a bisection method, followed if necessary by a local mesh optimization, as proposed by A. Rassineux in 2003. Remeshing, even when local, must be accompanied by field transfer procedures for both nodal variables and integration-point variables. Nodal variables are, as most authors do, transferred using the finite element shape functions. The 3D field transfer at Gauss points, and the many underlying problems, have received relatively little attention in the literature. 
The main difficulties to be solved in order to ensure the "quality" of the transfer concern limiting numerical diffusion, the lack of information near boundaries, respecting boundary conditions, equilibrium, computational cost, and the filtering of the information points; these issues are crucial in 3D, where the number of Gauss points involved runs to several hundred. We propose a so-called "hybrid" method which consists, first, in extrapolating the Gauss-point data to the nodes by diffuse interpolation, and then in using the finite element shape functions to obtain the value at the point considered.
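A minimal sketch of the two-step structure of such a hybrid transfer, with inverse-distance weighting standing in for the diffuse interpolation step (an assumption made for brevity) and linear tetrahedral shape functions (barycentric coordinates) for the evaluation step; the array layouts are illustrative.

```python
import numpy as np

def transfer_gauss_field(gauss_xyz, gauss_vals, node_xyz,
                         new_tets, new_gauss_bary, power=2, eps=1e-12):
    """Hybrid-style transfer of an integration-point field to a new mesh.

    Step 1: extrapolate Gauss-point values to the old nodes (here with
            inverse-distance weighting as a simple stand-in for diffuse
            interpolation).
    Step 2: evaluate the nodal field at the new integration points with
            linear tetrahedral shape functions, i.e. barycentric coordinates.
    """
    # Step 1: nodal values from Gauss-point values.
    d = np.linalg.norm(node_xyz[:, None, :] - gauss_xyz[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)
    nodal = (w @ gauss_vals) / w.sum(axis=1)

    # Step 2: new_tets holds the 4 node indices of each new element,
    # new_gauss_bary the barycentric coordinates of its integration point.
    return np.einsum('mk,mk->m', new_gauss_bary, nodal[new_tets])
```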
