11

The Research of Very Low Bit-Rate and Scalable Video Compression Using Cubic-Spline Interpolation

Wang, Chih-Cheng 18 June 2001 (has links)
This thesis applies one-dimensional (1-D) and two-dimensional (2-D) cubic-spline interpolation (CSI) schemes to the MPEG standard for very low bit-rate video coding. In addition, the CSI scheme is used to implement a scalable video compression scheme. The CSI scheme is based on the least-squares method with a cubic convolution function. It has been shown that the CSI scheme yields a very accurate algorithm for smoothing and obtains a better quality of reconstructed image than linear interpolation, linear-spline interpolation, cubic convolution interpolation, and cubic B-spline interpolation. In order to obtain very low bit-rate video, the CSI scheme is used along with the MPEG-1 standard for video coding. Computer simulations show that this modified MPEG not only avoids the blocking effect caused by MPEG at high compression ratios but also yields a very low bit-rate video coding scheme that still maintains reasonable video quality. Finally, the CSI scheme is also used to achieve scalable video compression. This new scalable video compression scheme allows the data rate to be changed dynamically by the CSI scheme, which is very useful when operating over communication networks with different transmission capacities.
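As a rough illustration of recovering a full-resolution signal from a decimated one by spline interpolation, here is a minimal Python sketch. It uses SciPy's generic `CubicSpline` on a hypothetical 1-D signal, not the thesis's least-squares CSI formulation or the MPEG pipeline:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical 1-D signal (e.g., one scan line of an image).
x_full = np.arange(256)
signal = np.sin(2 * np.pi * x_full / 64) + 0.25 * np.sin(2 * np.pi * x_full / 13)

# Downsample by 4: the "low rate" representation an encoder might transmit.
x_coarse = x_full[::4]
coarse = signal[::4]

# Reconstruct the full-resolution signal at the decoder by cubic-spline interpolation.
spline = CubicSpline(x_coarse, coarse)
reconstructed = spline(x_full)

print("mean squared reconstruction error:", np.mean((reconstructed - signal) ** 2))
```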
12

A DEVELOPMENT OF A COMPUTER AIDED GRAPHIC USER INTERFACE POSTPROCESSOR FOR ROTOR BEARING SYSTEMS

Arise, Pavan Kumar 01 January 2004 (has links)
Rotor dynamic analysis, which requires extensive amounts of data and rigorous analytical processing, has been eased by the advent of powerful and affordable digital computers. By incorporating the processor and a graphical-interface postprocessor in a single setup, this program offers a consistent and efficient approach to rotor dynamic analysis. The graphical user interface presented in this program effectively addresses the inherent complexities of rotor dynamic analyses by linking the required computational algorithms into a comprehensive program in which input data and results are exchanged, analyzed, and graphically plotted with minimal effort by the user. Simply by selecting an input file and the appropriate options, the user can carry out a comprehensive rotor dynamic analysis (synchronous response, stability analysis, and critical speed analysis with an undamped map) of a particular design and view the results, with several options to save the plots for further verification. This approach helps the user modify a turbomachinery design quickly, until an efficient design is reached, with minimal compromise in all aspects.
13

A Comparative Study of American Option Valuation and Computation

Rodolfo, Karl January 2007 (has links)
For many practitioners and market participants, the valuation of financial derivatives is of very high importance, as its uses range from risk management to speculative investment strategies and capital enhancement. A developing market requires efficient yet accurate methods for valuing financial derivatives such as American options. A closed-form analytical solution for American options has been very difficult to obtain due to the boundary conditions imposed on the valuation problem. Following the approach of solving the American option as a free boundary problem in the spirit of the "no-arbitrage" pricing framework of Black-Scholes, the option price and hedging parameters can be represented as an integral equation consisting of the European option value and an early exercise value dependent upon the optimal free boundary. Such methods exist in the literature and, along with risk-neutral pricing methods, have been implemented in practice. Yet existing methods are either accurate but inefficient, or accuracy has been sacrificed for computational speed. A new numerical approach to the valuation of American options by cubic splines is proposed and shown to be accurate and efficient when compared with existing option pricing methods. Further comparison is made of the behaviour of the American option's early exercise boundary under the different pricing models.
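For context on the decomposition mentioned above (American value equals the European value plus an early-exercise component), here is a minimal sketch of the Black-Scholes European put baseline. The parameters are hypothetical, and this is the standard closed-form European formula rather than the thesis's cubic-spline method:

```python
import numpy as np
from scipy.stats import norm

def black_scholes_put(S, K, T, r, sigma):
    """European put value under Black-Scholes; the American put adds a
    non-negative early-exercise premium on top of this value."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return K * np.exp(-r * T) * norm.cdf(-d2) - S * norm.cdf(-d1)

# Illustrative parameters (hypothetical, not taken from the thesis).
print(black_scholes_put(S=100.0, K=100.0, T=1.0, r=0.05, sigma=0.2))
```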
14

Bivariate C1 Cubic Spline Space Over a Nonuniform Type-2 Triangulation and Its Subspaces With Boundary Conditions

Liu, Huan Wen, Hong, Don, Cao, Dun Qian 01 June 2005 (has links)
In this paper, we discuss the algebraic structure of bivariate C1 cubic spline spaces over a nonuniform type-2 triangulation and its subspaces with boundary conditions. The dimensions of these spaces are determined and their local support bases are constructed.
15

Bivariate C1 Cubic Spline Spaces Over Even Stratified Triangulations

Liu, Huan Wen, Hong, Don 01 December 2002 (has links)
It is well known that the basic properties of a bivariate spline space, such as its dimension and approximation order, depend on the geometric structure of the partition. This dependence on the geometric structure is the reason why determining the dimension of a C1 cubic spline space over an arbitrary triangulation remains a well-known open problem. In this paper, by employing a new group of smoothness conditions and conformality conditions, we determine the dimension of bivariate C1 cubic spline spaces over so-called even stratified triangulations.
16

Systematic Digitized Treatment of Engineering Line-Diagrams

Sui, T.Z., Qi, Hong Sheng, Qi, Q., Wang, L., Sun, J.W. 05 1900 (has links)
In engineering design, there are many functional relationships that are difficult to express as a simple, exact mathematical formula. Instead, they are documented in the form of line graphs (or plot charts or curve diagrams) in engineering handbooks or textbooks. Because information in this form cannot be used directly in the modern computer-aided design (CAD) process, it is necessary to find a way to represent it numerically. In this paper, a data processing system for the numerical representation of line graphs in mechanical design is developed, incorporating the process cycle from initial data acquisition to final output of the required information. As well as providing curve fitting through cubic spline and neural network techniques, the system adopts a methodology that is novel in this application: Grey models. Grey theory has been used in various applications, normally involving time-series data, and is characterized by its ability to handle sparse data sets and perform forecasting. Two case studies were used to investigate the feasibility of Grey models for curve fitting. Furthermore, comparisons with the two established techniques show that the accuracy was better than that of the cubic spline function method but slightly lower than that of the neural network method. These results are highly encouraging, and future work to fully investigate the capability of Grey theory, as well as to exploit its sparse-data handling capabilities, is recommended.
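As an illustration of the cubic-spline curve-fitting step only (the Grey-model and neural-network fits are not shown), here is a minimal sketch assuming a handful of hypothetical digitized points read off a line graph:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Hypothetical digitized (x, y) points from an engineering line graph.
x = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0])
y = np.array([1.00, 1.35, 1.52, 1.60, 1.63, 1.60, 1.52, 1.40, 1.25])

# Fit a smoothing cubic spline (k=3); s trades fidelity to the digitized
# points against smoothness of the fitted curve.
spline = UnivariateSpline(x, y, k=3, s=1e-4)

# The fitted curve can now be queried at arbitrary abscissae, e.g. inside a CAD routine.
print(spline(1.75))
```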
17

Flickering Analysis of CH Cygni Using Kepler Data

Dingus, Thomas Holden 01 August 2016 (has links)
Utilizing data from the Kepler Mission, we analyze a flickering phenomenon in the symbiotic variable star CH Cygni. We perform a spline interpolation of an averaged lightcurve and subtract the spline to obtain residual data. This allows us to analyze the deviations that are not caused by the red giant's semi-regular periodic variations. We then histogram the residuals and compute moments (variance, skewness, and kurtosis) in order to determine the nature of the flickering. Our analysis shows flickering on a much smaller scale than reported in the previous literature, on the order of fractions of a percent of the luminosity. From our analysis, we are also confident that the flickering is a product of the accretion disc of the white dwarf.
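A minimal sketch of this trend-subtraction and moment analysis, using a synthetic lightcurve and SciPy in place of the actual Kepler data and the authors' pipeline:

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.stats import skew, kurtosis

np.random.seed(0)

# Synthetic Kepler-like lightcurve: slow semi-regular variation plus small flickering.
t = np.linspace(0.0, 100.0, 2000)
flux = 1.0 + 0.05 * np.sin(2 * np.pi * t / 40.0) + 0.002 * np.random.randn(t.size)

# Approximate the slow variation with a spline through binned (averaged) points.
bins = np.linspace(t.min(), t.max(), 25)
idx = np.digitize(t, bins)
t_avg = np.array([t[idx == i].mean() for i in np.unique(idx)])
f_avg = np.array([flux[idx == i].mean() for i in np.unique(idx)])
trend = CubicSpline(t_avg, f_avg)(t)

# The residuals isolate the flickering; their moments characterize its distribution.
residuals = flux - trend
print("variance:", residuals.var(), "skewness:", skew(residuals), "kurtosis:", kurtosis(residuals))
```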
18

Hidden Markov model with application in cell adhesion experiment and Bayesian cubic splines in computer experiments

Wang, Yijie Dylan 20 September 2013 (has links)
Estimation of the number of hidden states is challenging in hidden Markov models. Motivated by the analysis of a specific type of cell adhesion experiment, a new framework based on a hidden Markov model and double-penalized order selection is proposed. The order selection procedure is shown to be consistent in estimating the number of states. A modified Expectation-Maximization algorithm is introduced to estimate the model parameters efficiently. Simulations show that the proposed framework outperforms existing methods. Applications of the proposed methodology to real data demonstrate the accuracy of estimating receptor-ligand bond lifetimes and waiting times, which are essential in kinetic parameter estimation. The second part of the thesis is concerned with prediction of a deterministic response function y at untried sites, given values of y at a chosen set of design sites. The intended application is to computer experiments, in which y is the output of a computer simulation and each design site represents a particular configuration of the input variables. A Bayesian version of the cubic spline method commonly used in numerical analysis is proposed, in which the random function that represents prior uncertainty about y is taken to be a specific stationary Gaussian process. An MCMC procedure is given for updating the prior given the observed y values. Simulation examples and a real-data application are given to compare the performance of the Bayesian cubic spline with that of two existing methods.
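The computer-experiment part can be pictured as Gaussian-process prediction at untried sites. Below is a minimal sketch of the posterior mean under a squared-exponential kernel; this kernel is a stand-in for the thesis's specific cubic-spline-related process, the design and simulator are hypothetical, and the MCMC updating is omitted:

```python
import numpy as np

def gp_predict(x_train, y_train, x_test, length_scale=0.3, nugget=1e-10):
    """Posterior mean of a zero-mean Gaussian process with a squared-exponential
    kernel, conditioned on noiseless computer-experiment outputs."""
    def k(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length_scale ** 2)
    K = k(x_train, x_train) + nugget * np.eye(x_train.size)  # tiny jitter for stability
    alpha = np.linalg.solve(K, y_train)
    return k(x_test, x_train) @ alpha

# Hypothetical deterministic simulator output at a few design sites.
x_design = np.array([0.0, 0.2, 0.45, 0.7, 1.0])
y_design = np.sin(2 * np.pi * x_design)

# Predict the response at untried sites.
x_new = np.linspace(0.0, 1.0, 11)
print(gp_predict(x_design, y_design, x_new))
```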
19

Combining scientific computing and machine learning techniques to model longitudinal outcomes in clinical trials.

Subramanian, Harshavardhan January 2021 (has links)
Scientific machine learning (SciML) is a new branch of AI research at the intersection of scientific computing (Sci) and machine learning (ML). It deals with the efficient amalgamation of data-driven algorithms and scientific computing to discover the dynamics of a time-evolving process. The output of such algorithms is represented in the form of governing equations (e.g., ordinary differential equations, ODEs), which one can then solve for any time point and thus obtain a rigorous prediction. In this thesis, we present a methodology for incorporating the SciML approach in the context of clinical trials to predict IPF (idiopathic pulmonary fibrosis) disease progression in the form of a governing equation. Our proposed methodology also quantifies the uncertainties associated with the model by fitting a 95% high density interval (HDI) for the ODE parameters and a 95% posterior prediction interval for the posterior predicted samples. We have also investigated the possibility of predicting later outcomes using observations collected in the early phase of the study. We were successful in combining ML techniques, statistical methodologies and scientific computing tools such as bootstrap sampling, cubic spline interpolation, Bayesian inference and sparse identification of nonlinear dynamics (SINDy) to discover the dynamics behind the efficacy outcome, as well as in quantifying the uncertainty of the parameters of the governing equation in the form of 95% HDI intervals. We compared the resulting model with the existing disease progression model described by the Weibull function. Based on the mean squared error (MSE) criterion between our ODE-approximated values and the population means of the respective datasets, we achieved MSEs of 0.133, 0.089, 0.213 and 0.057. Compared with the MSE values obtained using the Weibull function, our ODE model reduced the error on the third dataset and the pooled dataset by 7.5% and 8.1%, respectively, whereas for the first and second datasets the Weibull model reduced the errors by 1.5% and 1.2%, respectively. Comparing the overall performance in terms of MSE, our proposed model approximates the population means better in all cases except the first and second datasets, where the error margin is very small. In terms of interpretation, our dynamical-system model also contains mechanistic elements that can explain the decay/acceleration rate of the efficacy endpoint, which are missing in the Weibull model. However, our approach was limited in its ability to predict final outcomes with good accuracy from models derived from the 24-, 36- and 48-week observations; the Weibull model, by contrast, possesses no predictive capability at all. The extrapolated trend based on 60 weeks of data was nevertheless found to be close to the population mean and to the ODE model built on 72 weeks of data. Finally, we highlight potential questions for future work.
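A minimal sketch of the sparse-identification idea (sequentially thresholded least squares over a small candidate library, the core of SINDy), using a synthetic exponential-decay trajectory rather than the thesis's clinical data or any particular SINDy library:

```python
import numpy as np

# Synthetic longitudinal means y(t) following dy/dt = -0.03*y, standing in
# for the spline-interpolated efficacy endpoint.
t = np.linspace(0.0, 72.0, 200)
y = 100.0 * np.exp(-0.03 * t)

# Derivatives of the interpolated trajectory; finite differences suffice here.
dydt = np.gradient(y, t)

# Candidate library of terms for the governing ODE: [1, y, y^2].
Theta = np.column_stack([np.ones_like(y), y, y ** 2])

# Sequentially thresholded least squares; the threshold is illustrative.
xi = np.linalg.lstsq(Theta, dydt, rcond=None)[0]
for _ in range(10):
    small = np.abs(xi) < 1e-3
    xi[small] = 0.0
    big = ~small
    xi[big] = np.linalg.lstsq(Theta[:, big], dydt, rcond=None)[0]

print("identified coefficients for [1, y, y^2]:", xi)  # expect roughly [0, -0.03, 0]
```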
20

PERFORMANCE MACRO-MODELING TECHNIQUES FOR FAST ANALOG CIRCUIT SYNTHESIS

WOLFE, GLENN A. January 2004 (has links)
No description available.
