About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
81

Surface Mesh Generation using Curvature-Based Refinement

Sinha, Bhaskar 13 December 2002 (has links)
Surface mesh generation is a critical component of the mesh generation process. The objective of the described effort was to determine if a combination of constrained Delaunay triangulation (for triangles), advancing front method (for quadrilaterals), curvature-based refinement, smoothing, and reconnection is a viable approach for discretizing a NURBS patch holding the boundary nodes fixed. The approach is significant when coupled with recently developed geometry specification that explicitly identifies common edges. This thesis describes the various techniques used to achieve the above objectives. Application of this approach to several representative geometries demonstrates that it is an effective alternative to traditional approaches.
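The following sketch is a hedged illustration of a common curvature-based sizing rule, not the exact refinement criterion developed in the thesis: the target edge length at a point is chosen so that a chord spans at most a fixed angle of the local osculating circle, and an edge is flagged for splitting when it exceeds the target at its midpoint. The angle tolerance, maximum size, and function names are assumptions made for the example.

```python
import math

def curvature_target_size(kappa, angle_tol_deg=15.0, h_max=1.0):
    """Target edge length so that a chord spans at most `angle_tol_deg`
    of arc on a circle of radius 1/kappa; flat regions get h_max."""
    if kappa <= 0.0:
        return h_max
    radius = 1.0 / kappa
    theta = math.radians(angle_tol_deg)
    return min(h_max, 2.0 * radius * math.sin(theta / 2.0))

def needs_refinement(edge_length, kappa_mid):
    """Split an edge when it is longer than the curvature-based target
    evaluated at the edge midpoint."""
    return edge_length > curvature_target_size(kappa_mid)
```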
82

Real-Time Water Depth Logger Data as Input to PCSWMM to Estimate Tree Filter Performance

Ertezaei, Bahareh January 2017 (has links)
No description available.
83

An Improved 2D Adaptive Smoothing Algorithm in Image Noise Removal and Feature Preservation

Hu, Xin 17 April 2009 (has links)
No description available.
84

Longitudinal Regression Analysis Using Varying Coefficient Mixed Effect Model

Al-Shaikh, Enas 15 October 2012 (has links)
No description available.
85

Evaluating time-series smoothing algorithms for multi-temporal land cover classification

Wheeler, Brandon Myles 23 July 2015 (has links)
In this study we applied the asymmetric Gaussian, double-logistic, and Savitzky-Golay filters to MODIS time-series NDVI data to compare the noise-reduction capability of smoothing algorithms for improving land cover classification in the Great Lakes Basin, and to provide groundwork to support cyanobacteria and cyanotoxin monitoring efforts. We used inter-class separability and intra-class variability, at varying levels of pixel homogeneity, to evaluate the effectiveness of the three smoothing algorithms. Based on these initial tests, the algorithm that returned the best results was used to analyze how image stratification by ecoregion can affect filter performance. MODIS 16-day 250m NDVI imagery of the Great Lakes Basin from 2001 to 2013 was used in conjunction with National Land Cover Database (NLCD) 2006 and 2011 data, and Cropland Data Layers (CDL) from 2008 to 2013, to conduct these evaluations. Inter-class separability was measured by Jeffries-Matusita (JM) distances between selected land cover classes (both general classes and specific crops), and intra-class variability was measured by calculating simple Euclidean distance for samples within a land cover class. Within the study area, it was found that the application of a smoothing algorithm can significantly reduce image noise, improving inter-class separability and reducing intra-class variability when compared to the raw data. Of the three filters examined, the asymmetric Gaussian filter consistently returned the highest values of inter-class separability, while all three filters performed very similarly for within-class variability. The ecoregion analysis based on the asymmetric Gaussian dataset indicated that the scale of the study area can heavily impact within-class variability. The criteria we established have potential for furthering our understanding of the strengths and weaknesses of different smoothing algorithms, thereby improving pre-processing decisions for land cover classification using time-series data. / Master of Science
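Of the three algorithms compared, the Savitzky-Golay filter is the most readily reproduced with common libraries. The snippet below is a minimal, hedged sketch of applying it to a synthetic 16-day NDVI trace; the data, window length, and polynomial order are invented for illustration and are not the settings evaluated in the thesis (the asymmetric Gaussian and double-logistic fits are not shown).

```python
import numpy as np
from scipy.signal import savgol_filter

# Hypothetical one-year, 16-day composite NDVI trace (23 observations)
# with synthetic cloud-contamination drops.
t = np.arange(23)
ndvi = 0.3 + 0.4 * np.exp(-0.5 * ((t - 11) / 4.0) ** 2)   # smooth seasonal curve
rng = np.random.default_rng(0)
noisy = ndvi.copy()
noisy[rng.choice(23, size=4, replace=False)] -= 0.25       # cloud-like dips

# Savitzky-Golay: fit a local polynomial over a moving window.
smoothed = savgol_filter(noisy, window_length=7, polyorder=2)
print(np.round(smoothed, 3))
```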
86

Some contributions to Latin hypercube design, irregular region smoothing and uncertainty quantification

Xie, Huizhi 21 May 2012 (has links)
In the first part of the thesis, we propose a new class of designs called multi-layer sliced Latin hypercube designs (MLSLHDs) for running computer experiments. A general recursive strategy for constructing MLSLHDs has been developed. Ordinary Latin hypercube designs and sliced Latin hypercube designs (SLHDs) are special cases of MLSLHDs with zero and one layer, respectively. A special case of MLSLHD with two layers, the doubly sliced Latin hypercube design (DSLHD), is studied in detail. The doubly sliced structure of a DSLHD allows more flexible batch sizes than an SLHD for collective evaluation of different computer models or batch-sequential evaluation of a single computer model. Both finite-sample and asymptotic sampling properties of DSLHDs are examined. Numerical experiments are provided to show the advantage of DSLHDs over SLHDs for both sequential evaluation of a single computer model and collective evaluation of different computer models. Other applications of DSLHDs include design for Gaussian process modeling with quantitative and qualitative factors, cross-validation, etc. Moreover, we also show that the sliced structure, possibly combined with other criteria such as distance-based criteria, can be utilized to sequentially sample from a large spatial data set when we cannot include all the data points for modeling. A data center example is presented to illustrate the idea. The enhanced stochastic evolutionary algorithm is deployed to search for optimal designs.

In the second part of the thesis, we propose a new smoothing technique called completely data-driven smoothing, intended for smoothing over irregular regions. The idea is to replace the penalty term in smoothing splines by its estimate based on a local least squares technique. A closed-form solution for our approach is derived. The implementation is easy and computationally efficient. With some regularity assumptions on the input region and analytical assumptions on the true function, it can be shown that our estimator achieves the optimal convergence rate in general nonparametric regression. The algorithmic parameter that governs the trade-off between fidelity to the data and smoothness of the estimated function is chosen by generalized cross-validation (GCV). The asymptotic optimality of GCV for choosing this parameter in our estimator is proved. Numerical experiments show that our method works well for both regular and irregular region smoothing.

The third part of the thesis deals with uncertainty quantification in building energy assessment. In current practice, building simulation is routinely performed with best guesses of input parameters whose true values cannot be known exactly. These guesses affect the accuracy and reliability of the outcomes. There is an increasing need to perform uncertainty analysis of those input parameters that are known to have a significant impact on the final outcome. In this part of the thesis, we focus on uncertainty quantification of two microclimate parameters: the local wind speed and the wind pressure coefficient. The idea is to compare the outcome of the standard model with that of a higher-fidelity model. Statistical analysis is then conducted to build a connection between the two. The explicit form of the statistical models can facilitate the improvement of the corresponding modules in the standard model.
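As a minimal sketch of the basic construction behind these designs (not the algorithm developed in the thesis), the code below draws an ordinary Latin hypercube sample and naively stacks several small samples as slices; a genuine sliced Latin hypercube design would additionally require the union of the slices to form a Latin hypercube, and the multi-layer designs impose this recursively. All names and parameters are illustrative assumptions.

```python
import numpy as np

def latin_hypercube(n, d, seed=None):
    """Ordinary Latin hypercube sample: one point per stratum in each dimension."""
    rng = np.random.default_rng(seed)
    perms = np.column_stack([rng.permutation(n) for _ in range(d)])
    return (perms + rng.random((n, d))) / n

def stacked_slices(n_slices, n_per_slice, d, seed=None):
    """Naive stand-in for a sliced design: each slice is a small LHD.
    (Unlike a true SLHD, the union is not guaranteed to be an LHD.)"""
    rng = np.random.default_rng(seed)
    return [latin_hypercube(n_per_slice, d, rng) for _ in range(n_slices)]

print(latin_hypercube(8, 2, seed=0))
```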
87

Smoothing And Differentiation Of Dynamic Data

Titrek, Fatih 01 May 2010 (has links) (PDF)
Smoothing is an important part of the pre-processing step in signal processing. A signal that has been purified of noise as much as possible is necessary to achieve our aim. There are many smoothing algorithms that give good results on stationary data, but these algorithms do not give the expected results on non-stationary data. Studying acceleration data is an effective way to see whether the smoothing is successful or not. Even a small amount of noise in the displacement data severely affects the acceleration data, which are obtained by taking the second derivative of the displacement data. In this thesis, some linear and non-linear smoothing algorithms are analyzed on a non-stationary data set.
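A minimal numerical illustration of the noise-amplification problem described above, with an invented displacement signal and noise level: double differentiation scales noise roughly by 1/dt², so even very small displacement noise dominates the acceleration unless the data are smoothed first.

```python
import numpy as np

dt = 0.01
t = np.arange(0.0, 2.0, dt)
displacement = np.sin(2 * np.pi * t)                       # clean motion
noisy = displacement + np.random.default_rng(1).normal(0.0, 1e-3, t.size)

# Central-difference second derivative; the implicit 1/dt**2 factor
# amplifies millimetre-scale noise into large acceleration errors.
accel_clean = np.gradient(np.gradient(displacement, dt), dt)
accel_noisy = np.gradient(np.gradient(noisy, dt), dt)

print("max |acceleration error|:", np.max(np.abs(accel_noisy - accel_clean)))
```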
89

Color Image Processing based on Graph Theory

Pérez Benito, Cristina 22 July 2019 (has links)
Computer vision is one of the fastest growing fields at present which, along with other technologies such as Biometrics or Big Data, has become the focus of interest of many research projects and is considered one of the technologies of the future. This broad field includes a plethora of digital image processing and analysis tasks. To guarantee the success of image analysis and other high-level processing tasks such as 3D imaging or pattern recognition, it is critical to improve the quality of the raw images acquired. Nowadays all images are affected by different factors that hinder the achievement of optimal image quality, making digital image (pre-)processing a fundamental step prior to any other practical application. The most common of these factors are noise and poor acquisition conditions: noise artefacts hamper proper interpretation of the image, and acquisition in poor lighting or exposure conditions, such as dynamic scenes, causes loss of image information that can be key for certain processing tasks. The image (pre-)processing steps known as smoothing and sharpening are commonly applied to overcome these problems: smoothing is aimed at reducing noise, and sharpening at improving or recovering imprecise or damaged information in image details and edges whose insufficient sharpness or blurred content prevents optimal image (post-)processing. There are many methods for smoothing the noise in an image; however, in many cases the filtering process causes blurring at the edges and details of the image. There are also many sharpening techniques, which try to combat the loss of information due to blurring of image texture but need to take into account the noise present in the image they process: when dealing with a noisy image, any sharpening technique may amplify the noise. Although the intuitive solution would be to filter first and sharpen afterwards, this two-stage approach has proved not to be optimal: the filtering can remove information that, in turn, may not be recoverable in the later sharpening step. In the present PhD dissertation we propose a model based on graph theory for color image processing from a vector approach. In this model, a graph is built for each pixel in such a way that its features allow the pixel to be characterized and classified. As we will show, the proposed model is robust and versatile: potentially able to adapt to a variety of applications. In particular, we apply the model to create new solutions for the two fundamental problems in image processing: smoothing and sharpening. To approach high-performance image smoothing we use the proposed model to determine whether or not a pixel belongs to a flat region, taking into account the need to achieve a high-precision classification even in the presence of noise. 
Thus, we build an adaptive soft-switching filter that employs the pixel classification to combine the output of a filter with high smoothing capability with that of a softer filter for edge/detail regions. Further, another application of our model uses the pixel characterization to perform simultaneous smoothing and sharpening of color images. In this way, we address one of the classical challenges within the image processing field. We compare all the proposed image processing techniques with other state-of-the-art methods to show that they are competitive from both an objective (numerical) and a visual evaluation point of view. / Pérez Benito, C. (2019). Color Image Processing based on Graph Theory [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/123955
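As a toy, hedged illustration of per-pixel graph-based classification (not the construction, features, or threshold rule used in the dissertation), the sketch below weights the edges between a pixel and its 8-neighbours by Euclidean color distance and labels the pixel as belonging to a flat region when every weight stays below a threshold.

```python
import numpy as np

def classify_pixel(img, y, x, threshold=30.0):
    """Toy per-pixel graph: nodes are the pixel and its 8-neighbours,
    edge weights are Euclidean color distances; all-light edges => flat."""
    h, w, _ = img.shape
    center = img[y, x].astype(float)
    weights = []
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                weights.append(np.linalg.norm(center - img[ny, nx].astype(float)))
    return "flat" if max(weights) < threshold else "edge/detail"
```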
90

Prognostisering av försäkringsärenden : Hur brytpunktsdetektion och effekter av historiska lag– och villkorsförändringar kan användas i utvecklingen av prognosarbete / Forecasting of insurance claims : How breakpoint detection and effects of historical legal and policy changes can be used in the development of forecasting

Tengborg, Sebastian, Widén, Joakim January 2013 (has links)
This report presents an approach for finding and dating breakpoints in time series, where a breakpoint is defined by the date at which a large level change occurs in the series. A strategy for estimating the effect of dated breakpoints is also presented. By analyzing time series of the claim inflow at AFA Försäkring, it turns out that breakpoints in the series coincide with exogenous events that may have caused them, for example policy or legal changes in the insurance industry. The report shows that, with a methodical approach, the effect of an exogenous event can be estimated. These estimated effects can be used in future forecasts when a similar change is expected to occur. In addition, forecasts of the claim inflow two years ahead are produced using different time-series models.
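As a minimal, hedged sketch of level-shift (breakpoint) detection on a synthetic claim-inflow series (the thesis's dating and effect-estimation procedure is not reproduced, and all values are invented), the code compares the mean of the observations before and after each date and flags dates where the change exceeds a threshold.

```python
import numpy as np

def detect_level_shifts(series, window=12, min_shift=None):
    """Flag indices where the mean of the next `window` observations differs
    from the mean of the previous `window` by more than `min_shift`."""
    series = np.asarray(series, dtype=float)
    if min_shift is None:
        min_shift = series.std()          # crude default threshold
    shifts = []
    for i in range(window, len(series) - window):
        before = series[i - window:i].mean()
        after = series[i:i + window].mean()
        if abs(after - before) > min_shift:
            shifts.append((i, after - before))
    return shifts

# Synthetic monthly claim inflow with a policy-change level shift at t = 60.
rng = np.random.default_rng(3)
inflow = np.r_[rng.normal(100, 5, 60), rng.normal(130, 5, 60)]
print(detect_level_shifts(inflow)[:3])
```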
