341

Quantization for Low Delay and Packet Loss

Subasingha, Subasingha Shaminda 22 April 2010 (has links)
Quantization of multimodal vector data in Realtime Interactive Communication Networks (RICNs) associated with application areas such as speech, video, audio, and haptic signals introduces a set of unique challenges. In particular, achieving the necessary distortion performance with minimum rate while maintaining low end-to-end delay and handling packet losses is of paramount importance. This dissertation presents vector quantization schemes which aim to satisfy these requirements based on two source coding paradigms: 1) predictive coding and 2) distributed source coding (DSC). Gaussian Mixture Models (GMMs) can model any probability density function (pdf) with arbitrarily small error given a sufficient number of mixture components; hence, GMMs can be used effectively to model the underlying pdfs of a variety of data in RICN applications.

First, we present GMM Kalman predictive coding, which combines transform-domain predictive GMM quantization techniques with Kalman filtering principles. In particular, we show how suitable modeling of quantization noise leads to a signal-adaptive GMM Kalman predictive coder that provides improved coding performance. Moreover, we demonstrate how running a GMM Kalman predictive coder to convergence can be used to design a stationary GMM Kalman predictive coding system which provides improved coding of GMM vector data with only a modest increase in run-time complexity over the baseline.

Next, we address packet loss in the network using GMM Kalman predictive coding principles. In particular, we show how an initial GMM Kalman predictive coder can be used to obtain a robust GMM predictive coder specifically designed to operate under packet loss. We demonstrate how one can define sets of encoding and decoding modes and design special Kalman encoding and decoding gains for each mode. Within this framework, GMM predictive coding design can be viewed as determining the special Kalman gains that minimize the expected mean squared error at the decoder under packet loss.

Finally, we present analytical techniques for modeling, analyzing, and designing Wyner-Ziv (WZ) quantizers for distributed source coding of jointly Gaussian vector data with imperfect side information. In most DSC implementations the side information is not explicitly available at the decoder, so almost all practical implementations obtain it from previously decoded frames. Due to model imperfections, packet losses, previous decoding errors, and quantization noise, the available side information is usually noisy; however, the design of Wyner-Ziv quantizers for imperfect side information has not been widely addressed in the DSC literature. The analytical techniques presented here explicitly assume imperfect side information at the decoder. Furthermore, we demonstrate how the design problem for vector data can be decomposed into independent scalar design subproblems, and we present analytical techniques to compute the optimum step size and bit allocation for each scalar quantizer such that the decoder's expected vector Mean Squared Error (MSE) is minimized. Simulation results verify that the MSE predicted by these analytical techniques closely follows the simulated MSE.
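To make the predictive-coding idea above concrete, here is a minimal, illustrative sketch (not the dissertation's actual coder) of a Kalman predictive quantizer: the encoder quantizes the prediction residual, both encoder and decoder run the same Kalman update, and quantization noise is modeled as additive noise with variance step^2/12. The state model and parameter values are hypothetical.

```python
import numpy as np

def uniform_quantize(x, step):
    """Midtread uniform quantizer applied elementwise."""
    return step * np.round(x / step)

def kalman_predictive_coder(frames, A, Q, step):
    """Encode a sequence of vectors by quantizing Kalman prediction residuals.

    frames: (T, n) array of source vectors
    A, Q:   assumed state transition matrix and process-noise covariance
    step:   quantizer step size
    Returns the decoder-side reconstructions.
    """
    n = frames.shape[1]
    x_hat = np.zeros(n)                 # estimate shared by encoder and decoder
    P = np.eye(n)                       # estimate covariance
    R = (step ** 2) / 12 * np.eye(n)    # quantization noise modeled as additive
    recon = []
    for x in frames:
        # Predict (both encoder and decoder can form this from past reconstructions)
        x_pred = A @ x_hat
        P_pred = A @ P @ A.T + Q
        # Encoder: quantize the prediction residual and transmit it
        r_q = uniform_quantize(x - x_pred, step)
        # Decoder: treat the quantized residual as a noisy observation of the innovation
        K = P_pred @ np.linalg.inv(P_pred + R)   # Kalman gain
        x_hat = x_pred + K @ r_q
        P = (np.eye(n) - K) @ P_pred
        recon.append(x_hat.copy())
    return np.array(recon)

# Toy usage: a slowly varying 2-D source
rng = np.random.default_rng(0)
A = 0.95 * np.eye(2)
Q = 0.01 * np.eye(2)
frames = np.cumsum(0.1 * rng.standard_normal((200, 2)), axis=0)
recon = kalman_predictive_coder(frames, A, Q, step=0.05)
print("reconstruction MSE:", np.mean((frames - recon) ** 2))
```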
342

Explicit Lp-norm estimates of infinitely divisible random vectors in Hilbert spaces with applications

Turner, Matthew D 01 May 2011 (has links)
I give explicit estimates of the Lp-norm of a mean-zero infinitely divisible random vector taking values in a Hilbert space in terms of a certain mixture of the L2- and Lp-norms of the Lévy measure. Using decoupling inequalities, the stochastic integral driven by an infinitely divisible random measure is defined. As a first application of the Lp-norm estimates, Itô isomorphisms for different types of stochastic integrals are computed. As a second application, I consider the discrete-time signal-observation model in the presence of an alpha-stable noise environment and give a formulation for computing the optimal linear estimate of the system state.
343

Identification of Heat Exchange Parameters of Thermal Networks from Transiently Measured Node Temperatures with Minimized Measurement Time (original German title: Identifikation von Wärmeaustauschparametern thermischer Netzwerke durch transient gemessene Knotentemperaturen bei minimierter Messzeit)

Erfurt 04 December 2001 (has links) (PDF)
No description available.
344

GPS receiver self survey and attitude determination using pseudolite signals

Park, Keun Joo 15 November 2004 (has links)
This dissertation explores both the estimation of various parameters from a multiple-antenna GPS receiver used as an attitude sensor, and attitude determination using GPS-like Pseudolite signals. To use a multiple-antenna GPS receiver as an attitude sensor, parameters such as baselines, integer ambiguities, line biases, and attitude should be resolved beforehand. A subsystem should also be implemented to correct for cycle slips. All of these tasks are called a self survey. A new algorithm to estimate these parameters from a GPS receiver is developed using nonlinear batch filtering methods. To address convergence issues, both the nonlinear least squares (NLS) and Levenberg-Marquardt (LM) methods are applied in the estimation. A comparison of the NLS and LM methods shows that the convergence of the LM method for large initial errors is more robust than that of the NLS. In the proximity of the International Space Station (ISS), Pseudolite signals replace the GPS signals since almost all GPS signals are blocked. Since the Pseudolite signals have spherical wavefronts, a new observation model must be applied. A nonlinear predictive filter, an extended Kalman filter (EKF), and an unscented filter (UF) are developed and compared using Pseudolite signals. The nonlinear predictive filter can provide a deterministic solution; however, it cannot be used for the moving case. Instead, the EKF or the UF can be used with angular rate measurements. A comparison of the EKF and UF shows that the convergence of the UF for large initial errors is more robust than that of the EKF. Also, an alternative global navigation constellation is presented using the Flower Constellation (FC) scheme. A comparison of the FC global navigation constellation with other navigation constellations (U.S. GPS, Galileo, and GLONASS) shows that the position and attitude errors of the FC constellation are smaller than those of the others.
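The NLS-versus-LM robustness comparison described above can be illustrated on a toy curve-fitting problem. This sketch is not the dissertation's self-survey formulation; the exponential model and starting point are hypothetical. It shows the adaptive damping that keeps the LM iteration stable from a poor initial guess, whereas an undamped Gauss-Newton step may overshoot or diverge.

```python
import numpy as np

def residuals(params, t, y):
    a, b = params
    return y - a * np.exp(b * t)          # toy model: y = a * exp(b t)

def jacobian(params, t, y):
    a, b = params
    J = np.empty((t.size, 2))
    J[:, 0] = -np.exp(b * t)              # d r / d a
    J[:, 1] = -a * t * np.exp(b * t)      # d r / d b
    return J

def gauss_newton(p, t, y, iters=50):
    """Undamped Gauss-Newton: full normal-equation step each iteration."""
    for _ in range(iters):
        r, J = residuals(p, t, y), jacobian(p, t, y)
        try:
            p = p - np.linalg.solve(J.T @ J, J.T @ r)
        except np.linalg.LinAlgError:
            break                          # normal equations became singular
    return p

def levenberg_marquardt(p, t, y, iters=200, lam=1e-2):
    """LM: damp the step and adapt the damping based on cost improvement."""
    cost = np.sum(residuals(p, t, y) ** 2)
    for _ in range(iters):
        r, J = residuals(p, t, y), jacobian(p, t, y)
        H = J.T @ J
        step = np.linalg.solve(H + lam * np.diag(np.diag(H)), -J.T @ r)
        new_cost = np.sum(residuals(p + step, t, y) ** 2)
        if new_cost < cost:                # accept step, trust the model more
            p, cost, lam = p + step, new_cost, lam / 10
        else:                              # reject step, damp more heavily
            lam *= 10
    return p

rng = np.random.default_rng(1)
t = np.linspace(0, 2, 50)
y = 2.0 * np.exp(1.3 * t) + 0.05 * rng.standard_normal(t.size)
p0 = np.array([20.0, -5.0])                # deliberately poor initial guess:
                                           # Gauss-Newton may overshoot or diverge,
                                           # LM damps its steps and stays stable
print("Gauss-Newton:", gauss_newton(p0, t, y))
print("Levenberg-Marquardt:", levenberg_marquardt(p0, t, y))
```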
345

SVI estimation of the implied volatility by Kalman filter.

Burnos, Sergey, Ngow, ChaSing January 2010 (has links)
Understanding and modelling the dynamics of the implied volatility smile is essential for trading, pricing, and portfolio risk management. We suggest a linear Kalman filter for updating the Stochastic Volatility Inspired (SVI) model of the volatility smile. From a risk management perspective, we generate 1-day-ahead forecasts of the profit and loss (P&L) of option portfolios. We compare the estimation of the implied volatility using the SVI model with a cubic polynomial model and find that the SVI Kalman filter outperforms the others.
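As an illustration of the approach described above, the sketch below uses the raw SVI parameterisation of total implied variance, w(k) = a + b(rho (k - m) + sqrt((k - m)^2 + sigma^2)), and a linear Kalman filter that tracks (a, b) day by day while rho, m and sigma are held fixed, so that the observation model stays exactly linear. This is a simplification for exposition, not the thesis's state-space formulation; all parameter values are hypothetical.

```python
import numpy as np

def svi_total_variance(k, a, b, rho, m, sigma):
    """Raw SVI parameterisation of total implied variance at log-moneyness k."""
    return a + b * (rho * (k - m) + np.sqrt((k - m) ** 2 + sigma ** 2))

def kalman_update_svi_ab(x, P, k_obs, w_obs, rho, m, sigma, q=1e-4, r=1e-4):
    """One Kalman step tracking (a, b) as a random walk; rho, m, sigma held fixed.

    x, P:         current state estimate (a, b) and its covariance
    k_obs, w_obs: observed log-moneyness points and total variances for the day
    """
    # Random-walk time update
    P = P + q * np.eye(2)
    # For fixed rho, m, sigma the SVI map is linear in (a, b): w = H @ (a, b)
    H = np.column_stack([np.ones_like(k_obs),
                         rho * (k_obs - m) + np.sqrt((k_obs - m) ** 2 + sigma ** 2)])
    S = H @ P @ H.T + r * np.eye(len(k_obs))
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + K @ (w_obs - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Toy usage: track a smile whose level drifts from day to day
rng = np.random.default_rng(2)
k = np.linspace(-0.4, 0.4, 11)
x, P = np.array([0.02, 0.1]), np.eye(2) * 0.01
for day in range(30):
    a_true = 0.02 + 0.0005 * day
    w = svi_total_variance(k, a_true, 0.1, -0.4, 0.0, 0.2) + 1e-4 * rng.standard_normal(k.size)
    x, P = kalman_update_svi_ab(x, P, k, w, rho=-0.4, m=0.0, sigma=0.2)
print("estimated (a, b):", x)
```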
346

Applications of Cost Function-Based Particle Filters for Maneuvering Target Tracking

Wang, Sung-chieh 23 August 2007 (has links)
For target tracking environments with highly nonlinear models and non-Gaussian noise, the tracking performance of the particle filter is better than that of the extended Kalman filter; in addition, the design of the particle filter is simpler, making it well suited to realistic environments. However, the particle filter depends on the probability model of the noise: if the knowledge of the noise is incorrect, its tracking performance degrades severely. To tackle this problem, cost function-based particle filters have been studied. Though they suffer a minor performance degradation, cost function-based particle filters do not need probability assumptions on the noise, which makes them more robust in realistic environments. Because they do not depend on a noise model, cost function-based particle filters can make maneuvering multiple-target tracking suitable for any environment. The difficulty lies in the link between the estimator and data association: the likelihood function is generally obtained from the data association algorithm, while in the cost function-based particle filter, cost functions are used to move the particles and update the corresponding weights without probability assumptions on the noise. This thesis focuses on combining data association with cost function-based particle filters in order to make the multiple-target tracking algorithm more robust in noisy environments.
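A minimal sketch of the cost-function idea for a single maneuvering target (not the thesis's multiple-target algorithm with data association): particle weights come from exp(-beta * cost) with the cost taken as the squared innovation, so no explicit probability model of the noise is assumed. The motion model, beta and jitter values are illustrative.

```python
import numpy as np

def cost_pf_track(measurements, n_particles=1000, dt=1.0, jitter=0.5, beta=2.0, seed=0):
    """Cost-function-based particle filter for a 1-D constant-velocity target.

    Particles are propagated through a constant-velocity motion model with added
    jitter; weights come from exp(-beta * cost) with cost = squared innovation,
    so no explicit measurement-noise distribution is assumed.
    """
    rng = np.random.default_rng(seed)
    # State per particle: [position, velocity]
    particles = np.zeros((n_particles, 2))
    particles[:, 0] = measurements[0] + rng.standard_normal(n_particles)
    particles[:, 1] = rng.standard_normal(n_particles)
    F = np.array([[1.0, dt], [0.0, 1.0]])
    estimates = []
    for z in measurements:
        # Propagate with roughening noise so the cloud keeps exploring
        particles = particles @ F.T + jitter * rng.standard_normal(particles.shape)
        # Cost-based weighting: squared innovation instead of a likelihood
        cost = (z - particles[:, 0]) ** 2
        w = np.exp(-beta * (cost - cost.min()))   # shift by min cost for stability
        w /= w.sum()
        estimates.append(w @ particles)           # weighted-mean state estimate
        # Resample by weight
        idx = rng.choice(n_particles, size=n_particles, p=w)
        particles = particles[idx]
    return np.array(estimates)

# Toy usage: a target that maneuvers (changes velocity) partway through
true_pos = np.concatenate([np.arange(0, 50, 1.0), 50 + 3.0 * np.arange(1, 31)])
z = true_pos + np.random.default_rng(3).standard_normal(true_pos.size)
est = cost_pf_track(z)
print("final position estimate vs truth:", est[-1, 0], true_pos[-1])
```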
347

Designing An Interplanetary Autonomous Spacecraft Navigation System Using Visible Planets

Karimi, Reza 2012 May 1900 (has links)
A perfect duality exists between the problem of space-based orbit determination from line-of-sight measurements and the problem of designing an interplanetary autonomous navigation system. Mathematically, these two problems are equivalent: any method solving the first problem can be used to solve the second, and vice versa. While the first problem estimates the observed unknown object's orbit using the known observer orbit, the second problem does exactly the opposite (e.g., the spacecraft observes a known visible planet). However, in an interplanetary navigation problem, in addition to the measurement noise, the following "perturbations" must be considered: 1) the light-time effect due to the finite speed of light and the large distances between the observer and the planets, and 2) light aberration, including the special relativistic effect. These two effects require corrections to the initial orbit estimation problem. Exploiting this duality, several new angles-only Initial Orbit Determination (IOD) techniques are developed here that can use multiple observations, provide higher orbit estimation accuracy, and do not suffer from some of the limitations associated with classical and some newly developed IOD methods. Using multiple observations makes these techniques suitable for coplanar orbit determination problems, which arise in spacecraft navigation using visible planets since the solar system planets are all almost coplanar. Four new IOD techniques are developed and the Laplace method is modified. For autonomous navigation, an Extended Kalman Filter (EKF) is employed, with the output of the IOD algorithm used as its initial condition. The two "perturbations" caused by the light-time effect and stellar aberration, including the special relativistic effect, must also be taken into account, and the corresponding corrections are implemented in the EKF scheme for the autonomous spacecraft navigation problem.
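The light-time correction mentioned above is commonly handled as a small fixed-point iteration: the signal received at time t left the planet at t - tau, with tau = range / c. The sketch below shows that iteration with a hypothetical circular-orbit ephemeris standing in for a real one; the aberration and relativistic corrections discussed in the abstract are not included.

```python
import numpy as np

C_LIGHT = 299792.458  # speed of light, km/s

def light_time_corrected_position(planet_pos_at, sc_pos, t, tol=1e-10, max_iter=10):
    """Iterate for the planet position actually seen by the spacecraft at time t.

    planet_pos_at: function returning the planet's position (km) at a given time (s)
    sc_pos:        spacecraft position (km) at observation time t
    The observed light left the planet at t - tau, so we solve the fixed point
    tau = |planet(t - tau) - spacecraft(t)| / c.
    """
    tau = 0.0
    for _ in range(max_iter):
        p = planet_pos_at(t - tau)
        new_tau = np.linalg.norm(p - sc_pos) / C_LIGHT
        if abs(new_tau - tau) < tol:
            break
        tau = new_tau
    return planet_pos_at(t - tau), tau

# Toy usage with a hypothetical circular-orbit ephemeris (not a real ephemeris model)
def toy_planet(t, radius_km=2.28e8, period_s=5.94e7):
    ang = 2.0 * np.pi * t / period_s
    return radius_km * np.array([np.cos(ang), np.sin(ang), 0.0])

sc = np.array([1.5e8, 0.0, 0.0])
pos, tau = light_time_corrected_position(toy_planet, sc, t=1.0e6)
print("light time (s):", tau)
```

The line-of-sight measurement would then be formed from the corrected vector pos - sc, with the aberration correction applied afterwards.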
348

Automated Rehabilitation Exercise Motion Tracking

Lin, Jonathan Feng-Shun January 2012 (has links)
Current physiotherapy practice relies on visual observation of the patient for diagnosis and assessment. The assessment process can potentially be automated to improve accuracy and reliability. This thesis proposes a method to recover patient joint angles and automatically extract movement profiles utilizing small and lightweight body-worn sensors. Joint angles are estimated from sensor measurements via the extended Kalman filter (EKF). Constant-acceleration kinematics is employed as the state evolution model. The forward kinematics of the body is utilized as the measurement model. The state and measurement models are used to estimate the position, velocity and acceleration of each joint, updated based on the sensor inputs from inertial measurement units (IMUs). Additional joint limit constraints are imposed to reduce drift, and an automated approach is developed for estimating and adapting the process noise during on-line estimation.

Once joint angles are determined, the exercise data is segmented to identify each of the repetitions. This process of identifying when a particular repetition begins and ends allows the physiotherapist to obtain useful metrics such as the number of repetitions performed, or the time required to complete each repetition. A feature-guided hidden Markov model (HMM) based algorithm is developed for performing the segmentation. In a sequence of unlabelled data, motion segment candidates are found by scanning the data for velocity-based features, such as velocity peaks and zero crossings, which match the pre-determined motion templates. These segment potentials are passed into the HMM for template matching. This two-tier approach combines the speed of a velocity feature based approach, which only requires the data to be differentiated, with the accuracy of the more computationally-heavy HMM, allowing for fast and accurate segmentation.

The proposed algorithms were verified experimentally on a dataset consisting of 20 healthy subjects performing rehabilitation exercises. The movement data was collected by IMUs strapped onto the hip, thigh and calf. The joint angle estimation system achieves an overall average RMS error of 4.27 cm when compared against motion capture data. The segmentation algorithm reports 78% accuracy when the template training data comes from the same participant, and 74% for a generic template.
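For reference, the constant-acceleration state evolution model named above has a standard discrete-time form; the sketch below pairs it with a generic EKF step, using a trivial direct angle measurement as a stand-in for the forward-kinematics/IMU measurement model. The noise levels and time step are illustrative, not values from the thesis.

```python
import numpy as np

def constant_acceleration_F_Q(dt, q):
    """State transition F and process noise Q for one joint's [angle, velocity, acceleration].

    This is the standard constant-acceleration kinematic block used as the state
    evolution model; the forward-kinematics measurement model is application-specific.
    """
    F = np.array([[1.0, dt, 0.5 * dt ** 2],
                  [0.0, 1.0, dt],
                  [0.0, 0.0, 1.0]])
    # White-noise-jerk discretisation of the process covariance (one common choice)
    Q = q * np.array([[dt ** 5 / 20, dt ** 4 / 8, dt ** 3 / 6],
                      [dt ** 4 / 8,  dt ** 3 / 3, dt ** 2 / 2],
                      [dt ** 3 / 6,  dt ** 2 / 2, dt]])
    return F, Q

def ekf_step(x, P, z, F, Q, h, H_jac, R):
    """Generic EKF predict/update with measurement function h and Jacobian H_jac."""
    # Predict with the constant-acceleration model
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the (possibly nonlinear) measurement model
    H = H_jac(x)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - h(x))
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Toy usage: "measure" the joint angle directly (stand-in for forward kinematics + IMU)
F, Q = constant_acceleration_F_Q(dt=0.01, q=1e-2)
h = lambda x: np.array([x[0]])
H_jac = lambda x: np.array([[1.0, 0.0, 0.0]])
R = np.array([[1e-4]])
x, P = np.zeros(3), np.eye(3)
for k in range(200):
    z = np.array([np.sin(0.01 * k)])          # synthetic joint-angle signal
    x, P = ekf_step(x, P, z, F, Q, h, H_jac, R)
print("estimated [angle, velocity, acceleration]:", x)
```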
349

Forecast Comparison of Models Based on SARIMA and the Kalman Filter for Inflation

Nikolaisen Sävås, Fredrik January 2013 (has links)
Inflation is one of the most important macroeconomic variables. It is vital that policy makers receive accurate forecasts of inflation so that they can adjust their monetary policy to attain stability in the economy, which has been shown to lead to economic growth. The purpose of this study is to model inflation and evaluate whether applying the Kalman filter to SARIMA models leads to higher forecast accuracy than using the SARIMA model alone. The Box-Jenkins approach is used to obtain well-fitted SARIMA models; a subset of observations is then used to estimate a SARIMA model on which the Kalman filter is applied for the remaining observations. These models are identified and estimated using monthly inflation for Luxembourg, Mexico, Portugal and Switzerland, with the aim of using them for forecasting. The accuracy of the forecasts is then evaluated with the error measures mean squared error (MSE), mean absolute deviation (MAD), mean absolute percentage error (MAPE) and the statistic Theil's U. For all countries these measures indicate that the Kalman-filtered model yields more accurate forecasts. The significance of these differences is then evaluated with the Diebold-Mariano test, for which only the difference in forecast accuracy of Swiss inflation is found significant. Thus, applying the Kalman filter to SARIMA models to obtain forecasts of monthly inflation seems to lead to higher, or at least not lower, predictive accuracy for the monthly inflation of these countries.
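A rough sketch of the evaluation pipeline described above, using statsmodels' state-space SARIMAX (whose predictions are produced by a Kalman filter): the model is fitted on a training subset, the Kalman filter is then run over the full series with those parameters to obtain one-step-ahead forecasts for the hold-out months, and MSE, MAD, MAPE and one common form of Theil's U are computed. The synthetic series and model orders are illustrative and not taken from the thesis.

```python
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Synthetic monthly "inflation" series with seasonality and persistence (illustration only)
rng = np.random.default_rng(4)
n, n_train = 180, 144
t = np.arange(n)
y = 2.0 + 0.3 * np.sin(2 * np.pi * t / 12) + 0.1 * rng.standard_normal(n)
for i in range(1, n):
    y[i] += 0.5 * (y[i - 1] - 2.0)             # AR(1)-style persistence

order, seasonal_order = (1, 0, 0), (1, 0, 0, 12)

# Fit the SARIMA model on the training subset (Box-Jenkins identification not shown)
train_res = SARIMAX(y[:n_train], order=order, seasonal_order=seasonal_order,
                    trend='c').fit(disp=False)

# Re-run the Kalman filter over the full series with the training-set parameters;
# get_prediction then yields one-step-ahead forecasts for the hold-out months
full_res = SARIMAX(y, order=order, seasonal_order=seasonal_order,
                   trend='c').filter(train_res.params)
forecast = full_res.get_prediction(start=n_train).predicted_mean
actual = y[n_train:]

err = actual - forecast
mse = np.mean(err ** 2)
mad = np.mean(np.abs(err))                     # mean absolute deviation
mape = 100.0 * np.mean(np.abs(err / actual))   # mean absolute percentage error
theil_u = np.sqrt(mse) / (np.sqrt(np.mean(forecast ** 2)) + np.sqrt(np.mean(actual ** 2)))
print(f"MSE={mse:.5f}  MAD={mad:.5f}  MAPE={mape:.2f}%  Theil's U={theil_u:.4f}")
```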
350

A Least-Cost Strategy for Evaluating a Brownfields Redevelopment Project Subject to Indoor Air Exposure Regulations

Wang, Xiaomin 20 August 2012 (has links)
Over the course of the past several decades, the benefits of redeveloping brownfields have been widely recognized. Actions have been taken to foster sustainable redevelopment of brownfields by governments, policy makers and stakeholders across the world. However, redevelopments encounter great challenges and risks related to environmental and non-environmental issues. In this work, we intend to build a comprehensive and practical framework to evaluate the hydrogeological and financial risks involved during redevelopment and to ensure developers reserve sufficient capital to cover unexpected future costs within the guarantee period. Punitive damages, which contribute to these costs, are in this thesis solely associated with the cost of repossessing a house within a development should the indoor air concentration of TCE exceed the regulatory limit at a later time. The uncertainties associated with brownfield remediation have been among the barriers to brownfield redevelopment, mainly because of the lack of knowledge about a site's environmental condition. In order to alleviate uncertainties and to better understand the contaminant transport process in the subsurface, numerical simulations have been conducted to investigate the role of controlling parameters in determining the fate and transport of volatile organic compounds originating from a NAPL source zone located below the water table. In the first part of this thesis, the numerical model CompFlow Bio is used on a hypothetical three-dimensional problem geometry where multiple residential dwellings are built. The simulations indicate that uncertainty in the simulated indoor air concentration is sensitive to heterogeneity in the permeability structure of a stratigraphically continuous aquifer, with uncertainty defined as the probability of exceeding a regulatory limit. Houses that are laterally offset from the groundwater plume are less affected by vapour intrusion due to the limited transverse horizontal flux of TCE within the groundwater plume, in agreement with the ASTM (2008) guidance. Within this uncertainty framework, we show that the Johnson and Ettinger (1991) model generates overly conservative results and places the exclusion zone much further away from the groundwater plume relative to either CompFlow Bio or ASTM (2008). The probability of failure (the probability of exceedance of the regulatory limit) is defined and calculated for further study. Due to uncertainties resulting from parameter estimation and model prediction, a methodology is introduced to incorporate field measurements into the initial estimates from the numerical model in order to improve prediction accuracy. The principal idea of this methodology is to combine the geostatistical tool kriging with the Kalman filter, a statistical data assimilation method, to evaluate the worth and effectiveness of data in a quantitative way and so select an optimal sampling scenario. This methodology is also used to infer whether a house located adjacent to affected houses has indoor air problems, based on measurements from a monitored affected house, given that developers are liable if a problem occurs. In this part of the study, different sampling scenarios are set up in terms of permeability (1–80 boreholes) and soil gas concentration (2, 4 and 7 samples), and three metrics are defined and computed as criteria for comparison.
Financing brownfield redevelopment is often viewed as a major barrier to the development process, mainly due to the risks and liabilities associated with brownfields. The common way of managing the risk is to transfer it to insurers by purchasing insurance coverage. This work provides two different strategies to price the risk, which is equivalent to an insurance premium, and is intended to give an instructive insight into project planning and feasibility studies during the decision-making process of a brownfield project. The two strategies of risk capital valuation are an actuarial premium calculation principle and a martingale premium calculation principle, each accounting for the hydrogeological and financial uncertainties faced in a project. The data used for valuation are the posterior estimates obtained from data assimilation under the different sampling scenarios. A cost-benefit-risk analysis is employed as a basis to construct the objective function in order to find the least-cost sampling scenario for the project. The results show that drilling seven boreholes to extract permeability data, combined with soil gas sampling at either four or seven locations, gives the minimum total cost. A sensitivity analysis of some influential parameters (the safety loading factors and the possible methods for calculating the probability of failure) is performed to determine their importance in the risk capital valuation. This framework can be applied to provide guidance for other risk-based environmental projects.
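The data-assimilation step described in the first part of the abstract (blending model estimates with sparse borehole and soil-gas measurements) reduces, in its simplest linear-Gaussian form, to a Kalman measurement update. The sketch below shows that update on a toy 1-D field with an exponential spatial covariance standing in for the kriging structure; it illustrates the principle only and is not the thesis's CompFlow Bio workflow.

```python
import numpy as np

def assimilate_measurements(x_prior, P_prior, H, z, R):
    """Kalman measurement update: blend a model prior with noisy field observations.

    x_prior, P_prior: prior state (e.g., a property on a grid) and its covariance
    H:                observation operator selecting the measured locations
    z, R:             measurements and their error covariance
    """
    S = H @ P_prior @ H.T + R
    K = P_prior @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_post = x_prior + K @ (z - H @ x_prior)
    P_post = (np.eye(len(x_prior)) - K @ H) @ P_prior
    return x_post, P_post

# Toy usage: a 1-D "transect" of 50 cells with 3 borehole measurements
rng = np.random.default_rng(5)
n, obs_idx = 50, [5, 25, 40]
# Prior covariance with exponential spatial correlation (kriging-style structure)
d = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
P_prior = 1.0 * np.exp(-d / 10.0)
x_prior = np.zeros(n)
H = np.zeros((len(obs_idx), n))
H[np.arange(len(obs_idx)), obs_idx] = 1.0
z = np.array([0.8, -0.5, 0.3]) + 0.1 * rng.standard_normal(3)
R = 0.01 * np.eye(3)
x_post, P_post = assimilate_measurements(x_prior, P_prior, H, z, R)
print("posterior variance reduction at boreholes:", 1 - np.diag(P_post)[obs_idx])
```

The posterior covariance also quantifies how much each sampling scenario reduces uncertainty, which is the kind of data-worth comparison the abstract describes.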
