381. The Application of Disaggregation Methods to the Unemployment Rate of Turkey / Tuker, Utku Goksel, 01 September 2010
Modeling and forecasting the unemployment rate of a country is important for informing government policy. The available unemployment rate data for Turkey, provided by the Turkish Statistical Institute (TURKSTAT), are not in a format suitable for time series modeling. The data between 1988 and 2009 make it difficult to build a reliable time series model because the observations are too few and irregularly spaced. Applying disaggregation methods to parts of the unemployment rate data enables us to fit an appropriate time series model and to produce forecasts from it.
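As a rough illustration of the idea only (the abstract does not state which disaggregation method the thesis uses, and the numbers below are made up), an annual rate series can be disaggregated to quarterly values by spline interpolation:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical annual unemployment rates (%); values are illustrative only.
years = np.arange(1988, 1998)
annual = np.array([8.4, 8.6, 8.0, 8.2, 8.5, 8.9, 8.6, 7.6, 6.6, 6.8])

# Disaggregate to a quarterly series with a cubic spline through the
# annual observations (queries stay inside the observed range).
quarters = np.arange(1988.0, 1997.01, 0.25)
quarterly = CubicSpline(years, annual)(quarters)
```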
382. The Research of Very Low Bit-Rate and Scalable Video Compression Using Cubic-Spline Interpolation / Wang, Chih-Cheng, 18 June 2001
This thesis applies one-dimensional (1-D) and two-dimensional (2-D) cubic-spline interpolation (CSI) schemes to the MPEG standard for very low bit-rate video coding. In addition, the CSI scheme is used to implement a scalable video compression scheme.
The CSI scheme is based on the least-squares method with a cubic convolution function. It has been shown that the CSI scheme yields a very accurate smoothing algorithm and reconstructs images of better quality than linear interpolation, linear-spline interpolation, cubic convolution interpolation, and cubic B-spline interpolation.
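For orientation only: the CSI scheme above is a least-squares variant, but the basic use of cubic-spline interpolation to reconstruct a downsampled scan line can be sketched as follows (a standard spline, not the thesis's algorithm; the data are synthetic):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Intensities of a scan line kept after 2:1 downsampling (synthetic data).
x_coarse = np.arange(0, 16, 2)
y_coarse = np.sin(0.4 * x_coarse) + 1.0

# Reconstruct intermediate pixels with a cubic spline; queries stay
# inside [0, 14], so no extrapolation is involved.
spline = CubicSpline(x_coarse, y_coarse)
x_fine = np.arange(0.0, 14.5, 0.5)
y_fine = spline(x_fine)
```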
To obtain very low bit-rate video, the CSI scheme is used together with the MPEG-1 standard for video coding. Computer simulations show that this modified MPEG not only avoids the blocking effect caused by MPEG at high compression ratios but also yields a very low bit-rate video coding scheme that still maintains reasonable video quality. Finally, the CSI scheme is also used to achieve scalable video compression. The new scheme allows the data rate to be changed dynamically by the CSI scheme, which is very useful when operating over communication networks with different transmission capacities.
383. Analysis, comparison and modification of various Particle Image Velocimetry (PIV) algorithms / Estrada Perez, Carlos Eduardo, 17 February 2005
A program based on particle tracking velocimetry (PTV) was developed in this work. The program was successfully validated by means of artificial images in which parameters such as radius, concentration, and noise were varied in order to test their influence on the results. The program uses the mask cross-correlation technique for particle centroid location. Sub-pixel accuracy is achieved using two different methods: the three-point Gaussian interpolation method and the center-of-gravity method. The second method is used only if the first fails. The object-matching algorithm between frames uses cross-correlation with a non-binarized image.
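A minimal sketch of the two-stage sub-pixel idea described above, in one dimension for brevity (the actual program works on 2-D images; this is not its code):

```python
import numpy as np

def subpixel_peak(profile, i):
    """Refine the integer peak index i of a 1-D intensity profile.
    Tries the three-point Gaussian fit first and falls back to the
    center-of-gravity estimate if the fit is not applicable."""
    a, b, c = profile[i - 1], profile[i], profile[i + 1]
    if a > 0 and b > 0 and c > 0:
        denom = np.log(a) - 2.0 * np.log(b) + np.log(c)
        if denom < 0:  # negative log-curvature: a genuine peak
            return i + 0.5 * (np.log(a) - np.log(c)) / denom
    # Center-of-gravity fallback over the three-pixel window.
    w = np.array([a, b, c], dtype=float)
    return i + float(w @ np.array([-1.0, 0.0, 1.0]) / w.sum())
```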
A performance comparison between different particle image velocimetry (PIV) and PTV algorithms was carried out using the international standard PIV challenge artificial images. The best performance was obtained by the program developed in this work: it showed the best accuracy and the best spatial resolution, finding the largest number of correct vectors of all algorithms tested.
A procedure is proposed to obtain error estimates for real images from the errors calculated on artificial ones. Using this procedure, a real PIV image with 20% noise has an estimated average error of 0.1 pixel.
Results of the analysis of 200 experimental images are shown for the two best PTV algorithms.
384. Operator valued Hardy spaces and related subjects / Mei, Tao, 30 October 2006
We give a systematic study of the Hardy spaces of functions with values in the non-commutative Lp-spaces associated with a semifinite von Neumann algebra M. This is motivated by matrix-valued harmonic analysis (operator weighted norm inequalities, the operator Hilbert transform), as well as by the recent development of non-commutative martingale inequalities. Our non-commutative Hardy spaces are defined by non-commutative Lusin integral functions. It is proved in this dissertation that they are equivalent to those defined by the non-commutative Littlewood-Paley G-functions.

We also study the Lp boundedness of operator-valued dyadic paraproducts and prove that their Lq boundedness implies their Lp boundedness for all 1 < q < p < ∞.
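For orientation, one common convention for the (scalar-valued) dyadic paraproduct studied here is

$$\pi_b f \;=\; \sum_{I\ \mathrm{dyadic}} \langle b, h_I\rangle\,\langle f\rangle_I\, h_I, \qquad \langle f\rangle_I = \frac{1}{|I|}\int_I f,$$

where $h_I$ is the Haar function of the dyadic interval $I$ and $\langle f\rangle_I$ is the average of $f$ over $I$; in the operator-valued setting the symbol $b$ takes values in the von Neumann algebra $M$.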
385. Computer and physical experiments: design, modeling, and multivariate interpolation / Kang, Lulu, 28 June 2010
Many problems in science and engineering are solved through experimental investigation. Because experiments can be costly and time-consuming, it is important to design them efficiently so that maximum information about the problem can be obtained. It is equally important to devise efficient statistical methods for analyzing the experimental data so that none of the information is lost. This thesis contributes to several aspects of the design and analysis of experiments. It consists of two parts: the first focuses on physical experiments, the second on computer experiments.
The first part, on physical experiments, contains three works. The first develops Bayesian experimental designs for robustness studies, which can be applied in industry for quality improvement. Existing methods rely on modifying the effect hierarchy principle to give more importance to control-by-noise interactions, which can violate the true effect order of a system, because that order should not depend on the objective of an experiment. The proposed Bayesian approach uses a prior distribution to capture the effect hierarchy property and then an optimal design criterion to satisfy the robustness objectives. The second work extends this Bayesian approach to blocked experimental designs. The third proposes a new modeling and design strategy for mixture-of-mixtures experiments and applies it to the optimization of Pringles potato crisps. The proposed model substantially reduces the number of parameters of the existing multiple-Scheffé model and thus helps engineers design much smaller experiments.
The second part, on computer experiments, introduces two new methods for analyzing the data. The first is an interpolation method called regression-based inverse distance weighting (RIDW), which is shown to overcome some of the computational and numerical problems associated with kriging, particularly for large data sets and/or high-dimensional problems. In the second work, we introduce a general nonparametric regression method called kernel sum regression. More importantly, we show that a particular form of this regression method becomes an interpolation method, which can be used to analyze computer experiments with deterministic outputs.
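As a hedged sketch of the baseline that RIDW builds on (classical inverse distance weighting, i.e. Shepard's method; the regression correction that gives RIDW its name is not shown):

```python
import numpy as np

def idw_predict(X, y, x_new, power=2.0, eps=1e-12):
    """Shepard-style inverse distance weighting: a weighted average of
    the observed responses, with weights decaying with distance.
    X: (n, d) design matrix, y: (n,) responses, x_new: (d,) query."""
    d = np.linalg.norm(X - x_new, axis=1)
    hit = d < eps
    if hit.any():                  # query coincides with a design point,
        return float(y[hit][0])    # so the interpolation is exact there
    w = 1.0 / d**power
    return float(w @ y / w.sum())
```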
386. Modeling of the multiphase flow inside and at the exit of Diesel injectors / Moreau, Jean-Baptiste, 14 December 2005
Pollutant emission standards for vehicles are pushing car manufacturers to investigate high-pressure Diesel injection and the cavitation phenomenon that plays a dominant role in it. In this field, numerical simulation is a powerful and economical means of investigation. A homogeneous multiphase model has been developed: it considers a mixture of fuel (consisting of liquid and/or vapor) and gas. It is based on an equation of state built by tabulation between a barotropic law for the fuel and the ideal gas law for the gas. The validity of the approach is tested on a bubble implosion case and on classical 2-D injection cases. 3-D computations of realistic injectors highlight the influence of cavitation and of secondary flows, inside the injector orifice, on the destabilization of the jet and the primary atomization of the liquid core.
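A much-simplified sketch of the kind of mixture law involved (illustrative constants, no phase change; the thesis's tabulated equation of state is more elaborate):

```python
def mixture_density(p, y_gas, T=300.0, r_gas=287.0,
                    rho_ref=830.0, p_ref=1.0e5, c_liq=1400.0):
    """Density of a homogeneous fuel/gas mixture: a barotropic law for
    the liquid fuel blended with the ideal-gas law via the gas mass
    fraction y_gas. All constants are placeholders."""
    rho_fuel = rho_ref + (p - p_ref) / c_liq**2  # barotropic: drho/dp = 1/c^2
    rho_gas = p / (r_gas * T)                    # ideal-gas law
    v_mix = y_gas / rho_gas + (1.0 - y_gas) / rho_fuel  # specific volume
    return 1.0 / v_mix
```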
387. On the convergence of random functions defined by interpolation / Starkloff, Hans-Jörg; Richter, Matthias; vom Scheidt, Jürgen; Wunderlich, Ralf, 31 August 2004
In this paper we study sequences of random functions which are defined by interpolation procedures applied to a given random function. We investigate in what sense and under which conditions these sequences converge to the prescribed random function. Sufficient conditions are given for convergence of moment characteristics, of finite-dimensional distributions, and for weak convergence of distributions in spaces of continuous functions. The treatment of these questions is motivated by the investigation of Monte Carlo simulation procedures for certain classes of random functions.

An appendix summarizes basic facts concerning weak convergence of probability measures in metric spaces.
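A small numerical sketch of the theme (our construction, not the paper's): sample Brownian motion at n knots, interpolate linearly, and watch a moment characteristic (the variance function Var B(t) = t) being approached as the knot grid is refined:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 201)  # fine grid where paths are evaluated

def brownian(knots):
    """One Brownian path sampled at the given knot times (knots[0] = 0)."""
    steps = rng.normal(0.0, np.sqrt(np.diff(knots)))
    return np.concatenate(([0.0], np.cumsum(steps)))

for n in (5, 17, 65):
    knots = np.linspace(0.0, 1.0, n)
    paths = np.array([np.interp(t, knots, brownian(knots))
                      for _ in range(20000)])
    # Max deviation of the empirical variance from Var B(t) = t;
    # it shrinks as n grows (up to Monte Carlo noise).
    print(n, np.abs(paths.var(axis=0) - t).max())
```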
388. Clément-type interpolation on spherical domains - interpolation error estimates and application to a posteriori error estimation / Apel, Thomas; Pester, Cornelia, 31 August 2006
In this paper, a mixed boundary value problem for the Laplace-Beltrami operator is considered for spherical domains in $R^3$, i.e. for domains on the unit sphere. These domains are parametrized by spherical coordinates $(\varphi, \theta)$, so that functions on the unit sphere are treated as functions of these coordinates. Careful investigation leads to the introduction of a proper finite element space corresponding to an isotropic triangulation of the underlying domain on the unit sphere. Error estimates are proven for a Clément-type interpolation operator, where appropriately weighted norms are used. The estimates are applied to derive a reliable and efficient residual error estimator for the Laplace-Beltrami operator.
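For reference, with $\theta$ the polar angle and $\varphi$ the azimuthal angle, the Laplace-Beltrami operator on the unit sphere takes the familiar form

$$\Delta_S u = \frac{1}{\sin\theta}\,\frac{\partial}{\partial\theta}\Bigl(\sin\theta\,\frac{\partial u}{\partial\theta}\Bigr) + \frac{1}{\sin^2\theta}\,\frac{\partial^2 u}{\partial\varphi^2}.$$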
389. The ITL programming interface toolkit / Randrianarivony, Maharavo, 27 February 2007
This document serves as a reference for the beta version of our evaluation library ITL. First, it describes a library which gives programmers an easy way to evaluate the 3D image and the normal vector corresponding to a parameter value in the unit square. The API functions described in this document let programmers perform those evaluations without needing to understand the underlying CAD complications. As a consequence, programmers can concentrate on their own scientific interests. Our second objective is to describe the input, which is a set of parametric four-sided surfaces that have the structure required by some integral equation solvers.
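A hedged sketch of what such an evaluation interface computes; the function name and the finite-difference normals below are our own illustration, not the actual ITL API:

```python
import numpy as np

def eval_point_and_normal(surf, u, v, h=1e-6):
    """Evaluate a parametric patch surf: [0,1]^2 -> R^3 at (u, v) and
    return the 3D point together with a unit normal obtained from
    finite-difference tangents (illustrative, not ITL's method)."""
    du = np.asarray(surf(min(u + h, 1.0), v)) - np.asarray(surf(max(u - h, 0.0), v))
    dv = np.asarray(surf(u, min(v + h, 1.0))) - np.asarray(surf(u, max(v - h, 0.0)))
    n = np.cross(du, dv)
    return np.asarray(surf(u, v)), n / np.linalg.norm(n)
```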
390. Software pertaining to the preparation of CAD data from IGES interface for mesh-free and mesh-based numerical solvers / Randrianarivony, Maharavo, 27 February 2007
We focus on the programming aspect of the treatment of digitized geometries for subsequent use in mesh-free and mesh-based numerical solvers. That perspective includes the description of our C/C++ implementations, which use OpenGL for the visualization and MFC classes for the user interface. We report on our experience implementing the IGES interface, which serves as the input format for storing geometric information. For mesh-free numerical solvers, it is helpful to decompose the boundary of a given solid into a set of four-sided surfaces. Additionally, we describe the treatment of diffeomorphisms on four-sided domains by means of transfinite interpolation. In particular, Coons and Gordon patches are appropriate for dealing with such mappings when the equations of the delineating curves are explicitly known. On the other hand, we show the implementation of the mesh generation algorithms, which invoke the Laplace-Beltrami operator. We start from coarse meshes which are refined according to generalized Delaunay techniques. Our software also features the ability to treat assemblies of solids in the B-Rep scheme.
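As an illustration of the transfinite interpolation mentioned above, here is the standard bilinearly blended Coons patch built from four boundary curves (a textbook construction, not the authors' code):

```python
import numpy as np

def coons_patch(cb, ct, cl, cr, u, v):
    """Bilinearly blended Coons patch from boundary curves
    cb(u) = S(u,0), ct(u) = S(u,1), cl(v) = S(0,v), cr(v) = S(1,v),
    which must agree at the four corners. Returns S(u, v) in R^3."""
    ruled_u = (1 - v) * cb(u) + v * ct(u)          # blend bottom/top
    ruled_v = (1 - u) * cl(v) + u * cr(v)          # blend left/right
    corners = ((1 - u) * (1 - v) * cb(0.0) + u * (1 - v) * cb(1.0)
               + (1 - u) * v * ct(0.0) + u * v * ct(1.0))
    return ruled_u + ruled_v - corners             # remove double-counted part
```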