381

The Research of Very Low Bit-Rate and Scalable Video Compression Using Cubic-Spline Interpolation

Wang, Chih-Cheng 18 June 2001 (has links)
This thesis applies one-dimensional (1-D) and two-dimensional (2-D) cubic-spline interpolation (CSI) schemes to the MPEG standard for very low bit-rate video coding. In addition, the CSI scheme is used to implement a scalable video compression scheme. The CSI scheme is based on the least-squares method with a cubic convolution function. It has been shown that the CSI scheme yields a very accurate algorithm for smoothing and produces better reconstructed-image quality than linear interpolation, linear-spline interpolation, cubic convolution interpolation, and cubic B-spline interpolation. To obtain very low bit-rate video, the CSI scheme is used along with the MPEG-1 standard. Computer simulations show that this modified MPEG not only avoids the blocking effect that MPEG exhibits at high compression ratios but also yields a very low bit-rate video coding scheme that still maintains reasonable video quality. Finally, the CSI scheme is also used to achieve scalable video compression. This new scheme allows the data rate to be changed dynamically, which is very useful when operating over communication networks with different transmission capacities.
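To make the decimate-then-interpolate idea concrete, here is a minimal sketch in Python. It is not the author's CSI algorithm (which uses a least-squares fit with a cubic convolution kernel); SciPy's generic RectBivariateSpline stands in for the 2-D spline step, and the frame, sampling factor, and function names are illustrative assumptions.

```python
# Sketch only: the encoder transmits a decimated frame to cut the bit rate,
# and the decoder recovers full resolution by 2-D cubic-spline interpolation.
import numpy as np
from scipy.interpolate import RectBivariateSpline

def downsample(frame: np.ndarray, factor: int = 2) -> np.ndarray:
    """Decimate the frame before standard MPEG coding (placeholder step)."""
    return frame[::factor, ::factor]

def upsample_cubic(small: np.ndarray, shape: tuple) -> np.ndarray:
    """Reconstruct full resolution with a 2-D cubic spline (kx = ky = 3)."""
    h, w = small.shape
    spline = RectBivariateSpline(np.arange(h), np.arange(w), small, kx=3, ky=3)
    ys = np.linspace(0, h - 1, shape[0])
    xs = np.linspace(0, w - 1, shape[1])
    return spline(ys, xs)

frame = np.random.rand(64, 64)            # stand-in for a luminance block
recon = upsample_cubic(downsample(frame), frame.shape)
print("mean abs error:", np.abs(frame - recon).mean())
```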
382

Analysis, comparison and modification of various Particle Image Velocimetry (PIV) algorithms

Estrada Perez, Carlos Eduardo 17 February 2005 (has links)
A program based on particle tracking velocimetry (PTV) was developed in this work. The program was successfully validated by means of artificial images in which parameters such as particle radius, concentration, and noise were varied in order to test their influence on the results. The program uses the mask cross-correlation technique for particle centroid location. Sub-pixel accuracy is achieved using two different methods: three-point Gaussian interpolation and the center-of-gravity method, the latter used only if the former fails. The object-matching algorithm between frames uses cross-correlation with a non-binarized image. A performance comparison among different particle image velocimetry (PIV) and PTV algorithms was carried out using the international standard PIV challenge artificial images. The best performance was obtained by the program developed in this work: it showed the best accuracy and the best spatial resolution, finding the largest number of correct vectors of all the algorithms tested. A procedure is proposed to obtain error estimates for real images based on errors calculated with artificial ones; using this procedure, a real PIV image with 20% noise has an estimated average error of 0.1 pixel. Results of the analysis of 200 experimental images are shown for the two best PTV algorithms.
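The three-point Gaussian fit mentioned above has a standard closed form: fitting a parabola to the logarithms of the correlation peak and its two neighbours gives the sub-pixel offset. Below is a hedged sketch along one axis, with the centre-of-gravity fallback the abstract describes; variable names and thresholds are illustrative, not taken from the author's program.

```python
# Sub-pixel peak location from three samples around the integer maximum.
import numpy as np

def subpixel_offset(im1: float, i0: float, ip1: float) -> float:
    """Offset of the true peak from the integer maximum, in pixels."""
    if min(im1, i0, ip1) > 0.0:
        l_m, l_0, l_p = np.log(im1), np.log(i0), np.log(ip1)
        denom = 2.0 * (l_m - 2.0 * l_0 + l_p)
        if abs(denom) > 1e-12:
            return (l_m - l_p) / denom      # three-point Gaussian fit
    # Fallback: centre of gravity of the three samples (positions -1, 0, +1)
    total = im1 + i0 + ip1
    return (ip1 - im1) / total if total > 0 else 0.0

print(subpixel_offset(0.4, 1.0, 0.6))       # peak slightly right of the maximum
```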
383

Operator valued Hardy spaces and related subjects

Mei, Tao 30 October 2006 (has links)
We give a systematic study of the Hardy spaces of functions with values in the non-commutative Lp-spaces associated with a semifinite von Neumann algebra M. This is motivated by matrix valued harmonic analysis (operator weighted norm inequalities, operator Hilbert transform), as well as by the recent development of non-commutative martingale inequalities. Our non-commutative Hardy spaces are defined by non-commutative Lusin integral functions. It is proved in this dissertation that they are equivalent to those defined by the non-commutative Littlewood-Paley G-functions. We also study the Lp boundedness of operator valued dyadic paraproducts and prove that their Lq boundedness implies their Lp boundedness for all 1 < q < p < ∞.
384

Computer and physical experiments: design, modeling, and multivariate interpolation

Kang, Lulu 28 June 2010 (has links)
Many problems in science and engineering are solved through experimental investigations. Because experiments can be costly and time consuming, it is important to design the experiment efficiently so that maximum information about the problem can be obtained. It is also important to devise efficient statistical methods to analyze the experimental data so that none of the information is lost. This thesis makes contributions to several aspects of the field of design and analysis of experiments. It consists of two parts: the first focuses on physical experiments, the second on computer experiments. The part on physical experiments contains three works. The first develops Bayesian experimental designs for robustness studies, which can be applied in industry for quality improvement. Existing methods rely on modifying the effect hierarchy principle to give more importance to control-by-noise interactions, which can violate the true effect order of a system because that order should not depend on the objective of an experiment. The proposed Bayesian approach uses a prior distribution to capture the effect hierarchy property and then uses an optimal design criterion to satisfy the robustness objectives. The second work extends this Bayesian approach to blocked experimental designs. The third work proposes a new modeling and design strategy for mixture-of-mixtures experiments and applies it to the optimization of Pringles potato crisps; the proposed model substantially reduces the number of parameters in the existing multiple-Scheffé model and thus helps engineers design much smaller experiments. The part on computer experiments introduces two new methods for analyzing the data. The first is an interpolation method called regression-based inverse distance weighting (RIDW), which is shown to overcome some of the computational and numerical problems associated with kriging, particularly when dealing with large data and/or high-dimensional problems. In the second work, we introduce a general nonparametric regression method called kernel sum regression and, more importantly, show that a particular form of this regression method becomes an interpolation method, which can be used to analyze computer experiments with deterministic outputs.
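For orientation, here is a minimal sketch of plain inverse distance weighting, the baseline that RIDW builds on; the regression component that gives RIDW its name is not reproduced, and the data and exponent are illustrative. Unlike kriging, no covariance matrix has to be inverted, which is consistent with the scalability claim above.

```python
# Plain IDW interpolation: a distance-weighted average of observed outputs.
import numpy as np

def idw_predict(X: np.ndarray, y: np.ndarray, x_new: np.ndarray,
                power: float = 2.0) -> float:
    """Interpolate y at x_new; exact data points are reproduced exactly."""
    d = np.linalg.norm(X - x_new, axis=1)
    if np.any(d == 0.0):                 # exact hit: interpolation property
        return float(y[np.argmin(d)])
    w = 1.0 / d ** power
    return float(np.dot(w, y) / w.sum())

X = np.random.rand(50, 3)                # 50 design points in 3 dimensions
y = np.sin(X).sum(axis=1)                # deterministic computer-model output
print(idw_predict(X, y, np.array([0.5, 0.5, 0.5])))
```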
385

Modeling of the multiphase flow inside and at the outlet of Diesel injectors

Moreau, Jean-Baptiste 14 December 2005 (has links) (PDF)
Pollutant-emission standards for vehicles are pushing car manufacturers to take an interest in high-pressure Diesel injection and in the cavitation phenomenon that plays a dominant role in it. In this field, numerical simulation is a powerful and economical means of investigation. A homogeneous multiphase model has been developed: it considers a mixture of fuel (consisting of liquid and/or vapor) and gas. It is based on an equation of state built by tabulation between a barotropic law for the fuel and the ideal gas law for the gas. The validity of the approach is tested on a bubble-collapse case and on classical 2D injection cases. 3D computations of realistic injectors highlight the influence of cavitation and of secondary flows, inside the injector orifice, on the destabilization of the jet and the primary atomization of the liquid core.
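A hedged sketch of the equation-of-state construction described above: a (here crudely linearized) barotropic law for the fuel, the ideal gas law for the gas, and a homogeneous mixture density obtained from the phase mass fractions. All constants are placeholder values, not those of the thesis, whose tabulated barotrope also covers the liquid/vapor transition.

```python
# Homogeneous-mixture density from two single-phase equations of state.
def fuel_density(p: float) -> float:
    """Linearised barotropic law rho = rho0 + (p - p0) / c^2 (placeholder)."""
    rho0, p0, c = 830.0, 1.0e5, 1400.0   # assumed Diesel-like liquid values
    return rho0 + (p - p0) / c**2

def gas_density(p: float, T: float = 300.0) -> float:
    """Ideal gas law rho = p / (R_specific * T), air-like gas constant."""
    return p / (287.0 * T)

def mixture_density(p: float, y_gas: float) -> float:
    """Mass-fraction-weighted harmonic mean of the phase densities."""
    y_fuel = 1.0 - y_gas
    return 1.0 / (y_fuel / fuel_density(p) + y_gas / gas_density(p))

for p in (1.0e5, 1.0e7, 1.0e8):          # ambient to injection-like pressures
    print(p, mixture_density(p, y_gas=0.01))
```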
386

On the convergence of random functions defined by interpolation

Starkloff, Hans-Jörg, Richter, Matthias, vom Scheidt, Jürgen, Wunderlich, Ralf 31 August 2004 (has links) (PDF)
In this paper we study sequences of random functions which are defined by some interpolation procedures for a given random function. We investigate in what sense and under which conditions these sequences converge to the prescribed random function. Sufficient conditions are given for the convergence of moment characteristics, of finite-dimensional distributions, and for the weak convergence of distributions in spaces of continuous functions. The treatment of such questions is stimulated by an investigation of Monte Carlo simulation procedures for certain classes of random functions. In an appendix, basic facts concerning the weak convergence of probability measures in metric spaces are summarized.
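A small numerical illustration, not taken from the paper: a Wiener process is approximated by piecewise-linear interpolation of its values on an n-point grid, and one moment characteristic (the variance at t = 0.3, which should converge to 0.3) is tracked as the grid is refined.

```python
# Convergence of a moment characteristic under interpolation refinement.
import numpy as np

rng = np.random.default_rng(0)

def interpolated_variance(n: int, t: float, n_paths: int = 10000) -> float:
    """Variance at time t of linearly interpolated Wiener paths on n steps."""
    grid = np.linspace(0.0, 1.0, n + 1)
    steps = rng.normal(scale=np.sqrt(1.0 / n), size=(n_paths, n))
    paths = np.concatenate([np.zeros((n_paths, 1)),
                            np.cumsum(steps, axis=1)], axis=1)
    vals = np.array([np.interp(t, grid, p) for p in paths])
    return vals.var()

for n in (2, 5, 20, 100):                 # variance rises toward t = 0.3
    print(n, interpolated_variance(n, t=0.3))
```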
387

Clément-type interpolation on spherical domains - interpolation error estimates and application to a posteriori error estimation

Apel, Thomas, Pester, Cornelia 31 August 2006 (has links) (PDF)
In this paper, a mixed boundary value problem for the Laplace-Beltrami operator is considered for spherical domains in $R^3$, i.e. for domains on the unit sphere. These domains are parametrized by spherical coordinates $(\varphi, \theta)$, such that functions on the unit sphere are considered as functions in these coordinates. Careful investigation leads to the introduction of a proper finite element space corresponding to an isotropic triangulation of the underlying domain on the unit sphere. Error estimates are proven for a Clément-type interpolation operator, where appropriate weighted norms are used. The estimates are applied to derive a reliable and efficient residual error estimator for the Laplace-Beltrami operator.
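As a reminder of the setting, a tiny sketch of the parametrization: points of a spherical domain are addressed by $(\varphi, \theta)$, so functions on the unit sphere become functions of two variables. The convention below (theta as polar angle) is an assumption; the paper fixes its own.

```python
# Map spherical coordinates to a point on the unit sphere S^2.
import numpy as np

def sphere_point(phi: float, theta: float) -> np.ndarray:
    """(phi, theta) in [0, 2*pi) x [0, pi] -> point on the unit sphere."""
    return np.array([np.cos(phi) * np.sin(theta),
                     np.sin(phi) * np.sin(theta),
                     np.cos(theta)])

p = sphere_point(np.pi / 4, np.pi / 3)
print(p, np.linalg.norm(p))               # norm is 1: the point lies on S^2
```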
388

The ITL programming interface toolkit

Randrianarivony, Maharavo 27 February 2007 (has links) (PDF)
This document serves as a reference for the beta version of our evaluation library ITL. First, it describes a library which gives programmers an easy way to evaluate the 3D image and the normal vector corresponding to a parameter value in the unit square. The API functions described in this document let programmers make those evaluations without needing to understand the underlying CAD complications. As a consequence, programmers can concentrate on their own scientific interests. Our second objective is to describe the input, which is a set of parametric four-sided surfaces with the structure required by some integral equation solvers.
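Mathematically, evaluating "the 3D image and the normal vector" at a parameter value in the unit square amounts to evaluating the parametric map and the normalized cross product of its partial derivatives. The sketch below illustrates only this underlying computation; the function names are hypothetical and are not ITL's actual API.

```python
# Point and unit normal of a parametric patch via finite-difference partials.
import numpy as np

def surface(u: float, v: float) -> np.ndarray:
    """A stand-in parametric patch over the unit square (placeholder)."""
    return np.array([u, v, u * u + v * v])

def point_and_normal(u: float, v: float, h: float = 1e-6):
    du = (surface(u + h, v) - surface(u - h, v)) / (2 * h)   # dS/du
    dv = (surface(u, v + h) - surface(u, v - h)) / (2 * h)   # dS/dv
    n = np.cross(du, dv)
    return surface(u, v), n / np.linalg.norm(n)

print(point_and_normal(0.5, 0.5))
```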
389

Software pertaining to the preparation of CAD data from IGES interface for mesh-free and mesh-based numerical solvers

Randrianarivony, Maharavo 27 February 2007 (has links) (PDF)
We focus on the programming aspect of the treatment of digitized geometries for subsequent use in mesh-free and mesh-based numerical solvers. That perspective includes the description of our C/C++ implementations, which use OpenGL for the visualization and MFC classes for the user interface. We report on our experience implementing the IGES interface, which serves as the input format for storing geometric information. For mesh-free numerical solvers, it is helpful to decompose the boundary of a given solid into a set of four-sided surfaces. Additionally, we describe the treatment of diffeomorphisms on four-sided domains by using transfinite interpolations. In particular, Coons and Gordon patches are appropriate for dealing with such mappings when the equations of the delineating curves are explicitly known. On the other hand, we show the implementation of the mesh generation algorithms which invoke the Laplace-Beltrami operator. We start from coarse meshes which are refined according to generalized Delaunay techniques. Our software also features the ability to treat assemblies of solids in the B-Rep scheme.
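A hedged sketch of the bilinearly blended Coons patch mentioned above: a transfinite interpolant that reproduces four corner-compatible boundary curves given in explicit form. The boundary curves used here are arbitrary placeholders.

```python
# Bilinearly blended Coons patch: two ruled surfaces minus a bilinear term.
import numpy as np

def coons(u: float, v: float, bottom, top, left, right) -> np.ndarray:
    """Transfinite interpolation of four boundary curves on [0,1]^2."""
    ruled_v = (1 - v) * bottom(u) + v * top(u)            # lofting in v
    ruled_u = (1 - u) * left(v) + u * right(v)            # lofting in u
    corners = ((1 - u) * (1 - v) * bottom(0) + u * (1 - v) * bottom(1)
               + (1 - u) * v * top(0) + u * v * top(1))   # bilinear correction
    return ruled_v + ruled_u - corners

bottom = lambda u: np.array([u, 0.0, np.sin(np.pi * u)])
top    = lambda u: np.array([u, 1.0, 0.0])
left   = lambda v: np.array([0.0, v, 0.0])
right  = lambda v: np.array([1.0, v, 0.0])
print(coons(0.5, 0.5, bottom, top, left, right))
```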
390

Robust gesture recognition

Cheng, You-Chi 08 June 2015 (has links)
It is a challenging problem to make a general hand-gesture recognition system work in a practical operating environment. This study focuses mainly on recognizing English letters and digits performed near the steering wheel of a car and captured by a video camera. Like most human-computer interaction (HCI) scenarios, in-car gesture recognition suffers from various robustness issues, including multiple human factors and highly varying lighting conditions, which raises quite a few research issues to be addressed. First, multiple gesturing alternatives may share the same meaning, which is not typical in most previous systems. Next, gestures may not be the same as expected because users cannot see what exactly has been written, which increases gesture diversity significantly. In addition, varying illumination conditions make hand detection difficult and thus result in noisy hand gestures. Most severely, users tend to perform letters at a fast pace, which may leave too few frames to describe a gesture well. Since users are allowed to perform gestures freestyle, multiple alternatives and variations should be considered while modeling gestures. The main contribution of this work is to analyze and address these challenging issues step by step so that the robustness of the whole system can be effectively improved. By choosing a suitable color-space representation and applying compensation techniques for varying recording conditions, hand detection performance under multiple illumination conditions is first enhanced. Furthermore, the issues of low frame rate and differing gesturing tempo are resolved separately via cubic B-spline interpolation and the i-vector method for feature extraction. Finally, the remaining issues are handled by other modeling techniques such as sub-letter stroke modeling. Experimental results based on the above strategies show that the proposed framework clearly improves system robustness, encouraging future research on more discriminative features and modeling techniques.
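A hedged sketch of the frame-rate remedy described above: a sparsely sampled 2-D gesture trajectory is resampled densely through interpolating cubic B-splines, so that fast gestures still yield enough points for feature extraction. The trajectory and frame counts are synthetic.

```python
# Resample a sparse hand trajectory with an interpolating cubic B-spline.
import numpy as np
from scipy.interpolate import splev, splprep

# Eight frames of a fast "C"-like stroke (x, y hand positions)
t = np.linspace(0, 1.5 * np.pi, 8)
x, y = np.cos(t), np.sin(t)

tck, _ = splprep([x, y], s=0, k=3)        # s=0: interpolating cubic B-spline
u_dense = np.linspace(0.0, 1.0, 64)       # 64 resampled frames
x_d, y_d = splev(u_dense, tck)
print(len(x_d), x_d[0], y_d[0])           # starts at the original first frame
```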
