571

Design of Robust Adaptive Sliding Mode Controllers for Nonlinear Mismatched Systems

Lin, Kuo-Ching 23 June 2000 (has links)
A simple design methodology of robust adaptive sliding mode output tracking controllers for a class of MIMO nonlinear mismatched perturbed systems is presented in this thesis. First, the derivatives of the tracking error ...
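Such controllers are conventionally built around a sliding surface defined on the tracking error and its derivatives. As a point of reference only (the truncated abstract does not give the surface actually used in the thesis), a textbook sliding surface looks like:

```latex
% Illustrative sliding surface for output tracking; e(t) = y(t) - y_d(t) is
% the tracking error, r the relative degree, and \lambda > 0 a design gain
% (all assumptions -- not taken from the thesis).
s(t) = \left(\frac{\mathrm{d}}{\mathrm{d}t} + \lambda\right)^{r-1} e(t), \qquad \lambda > 0
% The control law is then chosen to enforce a reaching condition such as
% s(t)\,\dot{s}(t) \le -\eta\,|s(t)|, so trajectories reach s = 0 in finite
% time and the tracking error decays exponentially on the sliding surface.
```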
572

CockTail Search (CTS) for Video Motion Estimation

Wei, Sheng-Li 29 June 2001 (has links)
The performance and speed of interframe motion estimation for compressing frame sequences are important issues, especially in networking applications such as video conferencing and video on demand. In this paper, we propose a new fast block-matching search algorithm for motion estimation, called the cocktail search (CTS) algorithm. The new algorithm takes advantage of prior search algorithms proposed in the literature and improves on them based on our observations. Experimental results show that the proposed CTS algorithm provides better performance at lower computational cost than its predecessors; in other words, CTS obtains accurate motion vectors quickly and efficiently. These results are achieved by retaining the benefits of existing algorithms while remedying their drawbacks.
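The abstract does not spell out the CTS steps, but fast block-matching searches of this family all refine the same baseline: exhaustive full search under a sum-of-absolute-differences (SAD) criterion. A minimal sketch of that baseline, which algorithms such as CTS aim to approximate at a fraction of the cost (block size and search range here are typical defaults, not CTS parameters):

```python
import numpy as np

def full_search_block_match(ref, cur, bx, by, block=16, search=7):
    """Baseline full-search block matching under the SAD criterion.
    Fast algorithms such as CTS prune this candidate space."""
    target = cur[by:by + block, bx:bx + block].astype(np.int64)
    best_sad, best_mv = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = by + dy, bx + dx
            # skip candidates that fall outside the reference frame
            if y < 0 or x < 0 or y + block > ref.shape[0] or x + block > ref.shape[1]:
                continue
            cand = ref[y:y + block, x:x + block].astype(np.int64)
            sad = np.abs(target - cand).sum()
            if sad < best_sad:
                best_sad, best_mv = sad, (dx, dy)
    return best_mv, best_sad
```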
573

The motives and information content of stock repurchases

Liu, Yi-Hsiang 24 June 2002 (has links)
There were 506 stock repurchase announcements from 1999.8.9, when Taiwan adopted its treasury stock law, to the end of 2001. Listed companies clearly needed the law, as the application rate reached 37.23%. We study the announcements between 1999.8.9 and 2001.12.31 and try to identify the motives for stock repurchases. To understand the effect of market prediction, we build a prediction model and classify the market's predictions as right or wrong. Regarding the factors affecting the cumulative abnormal return after an announcement, we argue that a repurchase is quite similar to a cash dividend announcement, in that companies signal good news about improving prospects; this implies that the announcement effect is related to prior accounting information. The results are as follows: (1) the motives for stock repurchases are consistent with the optimal leverage ratio hypothesis and the dividend or tax hypothesis; companies are more inclined to repurchase stock when the board of directors has a higher collateral ratio or when the enterprise has previously used a subsidiary to repurchase its stock. (2) We cannot show that unexpected announcements earn higher abnormal returns than expected ones. (3) We also cannot show that prior accounting information affects the abnormal return, but the abnormal return is positively related to free cash flow, undervaluation, and the degree of information asymmetry.
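For readers unfamiliar with the event-study machinery behind these results, a minimal sketch of computing a cumulative abnormal return (CAR) with a market model follows; the window lengths and the model are illustrative assumptions, not those of the thesis:

```python
import numpy as np

def cumulative_abnormal_return(stock_ret, market_ret, event_idx,
                               est_win=120, evt_win=10):
    """Market-model event study: fit alpha/beta on the pre-event estimation
    window, then cumulate abnormal returns over the post-announcement window.
    Window lengths are illustrative defaults."""
    est = slice(event_idx - est_win, event_idx)
    beta, alpha = np.polyfit(market_ret[est], stock_ret[est], 1)
    evt = slice(event_idx, event_idx + evt_win)
    abnormal = stock_ret[evt] - (alpha + beta * market_ret[evt])
    return abnormal.sum()  # CAR over the event window
```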
574

Modified Motion Estimating Methods for Increasing Video Compression Rate

Wang, Sheng-Hung 28 June 2002 (has links)
In recent years the internet has come into widespread use and the number of subscribers has grown quickly, so many network applications, especially multimedia, have been developed. Because raw video content takes up considerable storage and transmission time, making it unsuitable for network applications, many video compression standards have been drawn up in the literature. Due to the temporal redundancy of video sequences, motion estimation/compensation has been widely used in interframe video coding standards such as MPEG-1, MPEG-2, H.261, and H.263 to reduce the bit rates required for transmission and storage. The performance and speed of interframe motion estimation are important issues, especially in networking applications such as video conferencing and video on demand. Today's motion estimation methods select the candidate point with the minimal mean square error (MSE), and motion compensation then applies JPEG-style compression to the estimation error. As is well known, JPEG employs the DCT to remove spatial-domain correlation, so the best motion estimation point is actually the one yielding the minimal compressed data size. Our analyses show that over 50% of the best estimation points do not have the minimal compressed data size; the factor that determines compressed data size is the correlation coefficient, not the MSE. Hence, we define a new criterion for motion estimation, based on the correlation within the motion-compensated residual, that achieves better motion compensation at a lower compressed bit rate.
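The proposed criterion itself is only outlined above, so the following sketch is an illustrative stand-in: it scores a candidate block by the spatial correlation of its residual (a more correlated residual tends to compress better under DCT-based coding, which is the effect the abstract describes) rather than by MSE:

```python
import numpy as np

def mse(a, b):
    d = a.astype(np.float64) - b
    return (d * d).mean()

def residual_correlation(a, b):
    """Illustrative alternative criterion: spatial correlation of the
    motion-compensated residual, measured between horizontally adjacent
    residual pixels. Not the thesis's exact formulation."""
    resid = a.astype(np.float64) - b
    x, y = resid[:, :-1].ravel(), resid[:, 1:].ravel()  # horizontal neighbors
    if x.std() == 0 or y.std() == 0:
        return 1.0  # constant residual: perfectly predictable
    return abs(np.corrcoef(x, y)[0, 1])
```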
575

Hierarchical SDD Metric and Multiresolution Motion Estimation

Hsu, Chin-Hsun 09 July 2002 (has links)
In this paper a novel hierarchical sum of double difference (HSDD) metric is introduced. We show how, in contrast to the conventional sum of absolute differences (SAD) metric, this embedded-coding-aware metric can jointly constrain the motion vector search in both the temporal and spatial (quad-tree) directions within a multiresolution motion estimation (MRME) framework. The temporal-spatial co-optimization behind HSDD yields a better-shaped motion compensation pyramid, so fewer bits are spent later describing isolated zeros. The compression performance of HSDD easily exceeds that of its competitors, especially at high compression ratios.
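HSDD's exact double-difference computation is not reproduced in the abstract; the sketch below only illustrates the multiresolution idea it builds on, scoring a match by accumulating differences at every level of a dyadic pyramid:

```python
import numpy as np

def dyadic_pyramid(block, levels=3):
    """Multiresolution pyramid by 2x2 averaging; the block side must be
    divisible by 2**(levels - 1)."""
    out = [block.astype(np.float64)]
    for _ in range(levels - 1):
        a = out[-1]
        out.append((a[0::2, 0::2] + a[1::2, 0::2] +
                    a[0::2, 1::2] + a[1::2, 1::2]) / 4.0)
    return out

def hierarchical_distortion(blk_a, blk_b, levels=3):
    """Sum per-level absolute differences so the score penalizes mismatch
    at every quad-tree resolution, in the spirit of HSDD (the exact
    double-difference weighting is an assumption left out here)."""
    pa, pb = dyadic_pyramid(blk_a, levels), dyadic_pyramid(blk_b, levels)
    return sum(np.abs(a - b).sum() for a, b in zip(pa, pb))
```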
576

A Study of Software Size Estimation using Function Point

Wang, Der-Rong 11 July 2003 (has links)
Software size estimation has long been a challenging task in the software development process. This paper presents an approach that uses function point analysis to estimate program coding and testing effort in an MIS department that maintains an ERP system and has a low employee turnover rate. The method first analyzes part of the historical data using regression analysis and builds a software estimation model with fitted coefficients for the relevant parameters. The model is then tested on the remaining historical data to evaluate its prediction accuracy, and is shown to be accurate to about 90%. It is therefore useful not only for company-wide information resource allocation, but also for the performance evaluation of software engineers.
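A minimal sketch of the regression step described above, with hypothetical historical data (the thesis's coefficients and parameters are not given here):

```python
import numpy as np

# Hypothetical historical records: (adjusted function points, person-hours).
history = np.array([[120, 410], [85, 300], [200, 690], [150, 505], [60, 215]])
fp, effort = history[:, 0], history[:, 1]

# effort ~ intercept + slope * FP, fitted by ordinary least squares
slope, intercept = np.polyfit(fp, effort, 1)

def estimate_effort(function_points):
    """Predict coding-and-testing effort (person-hours) from counted FPs."""
    return intercept + slope * function_points

print(round(estimate_effort(100)))  # effort estimate for a 100-FP change
```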
577

Analysis of beacon triangulation in random graphs

Kakarlapudi, Geetha 17 February 2005 (has links)
Our research focuses on the problem of finding nearby peers in the Internet. We concentrate on one particular approach, Beacon Triangulation, that is widely used to solve the peer-finding problem. Beacon Triangulation is based on the relative distances of nodes to some special nodes called beacons. The scheme gives an error when a new node that wishes to join the network has the same relative distance to two or more nodes; one reason for this error is that two or more nodes have the same distance vectors. As part of our work, we derive conditions that ensure the uniqueness of distance vectors in any network, given the shortest-path distribution of nodes in that network. We verify our analytical results for G(n, p) graphs and the Internet. We also derive other conditions under which the error in the Beacon Triangulation scheme reduces to zero, and we compare Beacon Triangulation to another well-known distance estimation scheme, Global Network Positioning (GNP).
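A small sketch of the core object in Beacon Triangulation, the distance vector, and of counting the collisions that cause the scheme's errors; the graph instance and beacon choice are illustrative assumptions:

```python
import networkx as nx

def beacon_distance_vectors(G, beacons):
    """Each node's coordinate is its vector of hop distances to the beacons;
    Beacon Triangulation errs exactly when two nodes share a vector."""
    dists = [nx.single_source_shortest_path_length(G, b) for b in beacons]
    return {v: tuple(d.get(v, float("inf")) for d in dists) for v in G.nodes}

# Illustrative G(n, p) instance with an arbitrary beacon set.
G = nx.gnp_random_graph(200, 0.05, seed=1)
vecs = beacon_distance_vectors(G, beacons=[0, 1, 2, 3])
collisions = len(vecs) - len(set(vecs.values()))
print(f"nodes sharing a distance vector: {collisions}")
```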
578

Nonlinear Bayesian filtering with applications to estimation and navigation

Lee, Deok-Jin 29 August 2005 (has links)
In principle, general approaches to optimal nonlinear filtering can be described in a unified way from the recursive Bayesian approach. The central idea of this recursive Bayesian estimation is to determine the probability density function of the state vector of the nonlinear system conditioned on the available measurements. However, the optimal exact solution to this Bayesian filtering problem is intractable since it requires an infinite dimensional process, so approximate solutions are required for practical nonlinear filtering applications. Recently, efficient and accurate approximate nonlinear filters have been proposed as alternatives to the extended Kalman filter for recursive nonlinear estimation of the states and parameters of dynamical systems. First, as sampling-based nonlinear filters, the sigma point filters, the unscented Kalman filter, and the divided difference filter are investigated. Secondly, a direct numerical nonlinear filter is introduced, where the state conditional probability density is calculated by applying fast numerical solvers to the Fokker-Planck equation in continuous-discrete system models. As a simulation-based nonlinear filter, a universally effective algorithm, the sequential Monte Carlo filter, which recursively utilizes a set of weighted samples to approximate the distributions of the state variables or parameters, is investigated for dealing with nonlinear and non-Gaussian systems. Recent particle filtering algorithms, developed independently in various engineering fields, are examined in a unified way. Furthermore, a new type of particle filter is proposed by integrating the divided difference filter with a particle filtering framework, leading to the divided difference particle filter. Sub-optimality of the approximate nonlinear filters due to unknown system uncertainties can be compensated by an adaptive filtering method that estimates both the state and the system error statistics. For accurate identification of the time-varying parameters of dynamic systems, new adaptive nonlinear filters that integrate the presented nonlinear filtering algorithms with noise estimation algorithms are derived. For qualitative and quantitative performance analysis among the proposed nonlinear filters, systematic methods for measuring the nonlinearity, bias, and optimality of the proposed filters are introduced. The proposed optimal and sub-optimal nonlinear filtering algorithms are investigated with applications to spacecraft orbit estimation and autonomous navigation. Simulation results indicate that the advantages of the proposed nonlinear filters make them attractive alternatives to the extended Kalman filter.
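Of the filters surveyed, the sequential Monte Carlo (bootstrap particle) filter is the easiest to sketch. The following minimal scalar version is illustrative only; the divided difference particle filter proposed in the thesis would replace the simple propagation/proposal step with a divided difference filter update:

```python
import numpy as np

def bootstrap_particle_filter(y, f, h, q_std, r_std, n=1000, x0_std=1.0):
    """Minimal scalar bootstrap (sequential Monte Carlo) filter for
    x_k = f(x_{k-1}) + w_k,  y_k = h(x_k) + v_k, with Gaussian noises.
    f and h must accept numpy arrays (vectorized)."""
    rng = np.random.default_rng(0)
    particles = rng.normal(0.0, x0_std, n)
    estimates = []
    for yk in y:
        particles = f(particles) + rng.normal(0.0, q_std, n)   # propagate
        w = np.exp(-0.5 * ((yk - h(particles)) / r_std) ** 2)  # likelihood
        w = (w + 1e-300) / (w + 1e-300).sum()                  # normalize
        estimates.append(np.dot(w, particles))                 # posterior mean
        particles = particles[rng.choice(n, size=n, p=w)]      # resample
    return np.array(estimates)
```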
579

Bootstrapping in a high dimensional but very low sample size problem

Song, Juhee 16 August 2006 (has links)
High dimension, low sample size (HDLSS) problems have received much attention recently in many areas of science, and analysis of microarray experiments is one such area. Numerous studies are ongoing to investigate the behavior of genes by measuring the abundance of mRNA (messenger ribonucleic acid), i.e., gene expression. The HDLSS data investigated in this dissertation consist of a large number of data sets, each of which has only a few observations. We assume a statistical model in which measurements from the same subject have the same expected value and variance, all subjects have the same distribution up to location and scale, and information from all subjects is shared in estimating this common distribution. Our interest is in testing the hypothesis that the mean of the measurements from a given subject is 0. Commonly used tests of this hypothesis (the t-test, the sign test, and traditional bootstrapping) do not necessarily provide reliable results, since there are only a few observations per data set. We motivate a mixture model having C clusters and 3C parameters to overcome the small sample size problem. Standardized data are pooled after assigning each data set to one of the mixture components. To get reasonable initial parameter estimates when density estimation methods are applied, we use clustering methods including agglomerative clustering and K-means. The Bayes information criterion (BIC) and a new criterion, WMCV (weighted mean of within-cluster variance estimates), are used to choose an optimal number of clusters. Density estimation methods, including a maximum likelihood unimodal density estimator and kernel density estimation, are used to estimate the unknown density. Once the density is estimated, a bootstrapping algorithm that draws samples from the estimated density is used to approximate the distribution of the test statistics. The t-statistic and an empirical likelihood ratio statistic are used, since their distributions are completely determined by the distribution common to all subjects. A method for controlling the false discovery rate is used to perform simultaneous tests on all the small data sets. Simulated data sets and a set of cDNA (complementary deoxyribonucleic acid) microarray experiment data are analyzed by the proposed methods.
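A toy version of the bootstrap step described above: once a common null distribution has been estimated from the pooled, standardized data, the null distribution of each small data set's t-statistic is approximated by resampling from it (names and sizes here are illustrative assumptions):

```python
import numpy as np

def bootstrap_t_pvalue(small_set, pooled_null, B=5000, seed=0):
    """Approximate the null distribution of one small data set's t-statistic
    by resampling from the pooled estimate of the common distribution,
    then return a two-sided bootstrap p-value."""
    rng = np.random.default_rng(seed)
    m = len(small_set)
    t_obs = small_set.mean() / (small_set.std(ddof=1) / np.sqrt(m))
    t_null = np.empty(B)
    for b in range(B):
        s = rng.choice(pooled_null, size=m, replace=True)
        t_null[b] = s.mean() / (s.std(ddof=1) / np.sqrt(m))
    return float((np.abs(t_null) >= abs(t_obs)).mean())
```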
580

Multi-area power system state estimation utilizing boundary measurements and phasor measurement units (PMUs)

Freeman, Matthew A 30 October 2006 (has links)
The objective of this thesis is to validate a multi-area state estimator and investigate the advantages it provides over a serial state estimator, using the IEEE 118 Bus Test System as a sample system. The benefits of the multi-area approach come largely in the form of increased accuracy and decreased processing time. First, the theory behind power system state estimation is explained for a simple serial estimator. The thesis then shows how conventional measurements and newer, more accurate PMU measurements work within the framework of weighted least squares estimation. Next, the multi-area state estimator is examined closely, and the additional measurements provided by PMUs are used to increase accuracy and computational efficiency. Finally, the multi-area state estimator is tested for accuracy, its ability to detect bad data, and computation time.
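The weighted least squares machinery referred to above can be sketched compactly. This is a generic Gauss-Newton WLS iteration under assumed measurement functions, not the thesis's implementation; PMU measurements enter simply as rows of z with small variances in R (high weight):

```python
import numpy as np

def wls_state_estimate(z, h, H, x0, R, iters=10, tol=1e-6):
    """Gauss-Newton weighted least squares state estimation, minimizing
    (z - h(x))' R^{-1} (z - h(x)). h is the measurement function and H(x)
    its Jacobian, both supplied by the caller."""
    x = np.asarray(x0, dtype=float).copy()
    W = np.linalg.inv(R)
    for _ in range(iters):
        r = z - h(x)                             # measurement residual
        Hx = H(x)
        G = Hx.T @ W @ Hx                        # gain matrix
        dx = np.linalg.solve(G, Hx.T @ W @ r)    # normal equations
        x += dx
        if np.abs(dx).max() < tol:
            break
    return x
```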
