  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Joint estimation in optical marker-based motion capture

Hang, Jianwei January 2018 (has links)
This thesis addresses several problems in skeletal motion reconstruction for motion analysis using marker-based optical motion capture, namely joint localisation, motion de-noising/smoothing, and soft tissue artefact correction. We propose a highly efficient joint localisation method that optimises over only three parameters, regardless of the total number of markers and frames. A framework built on this joint localisation solution is also developed; it automatically finds all the joints in an articulated body structure and, by implementing a solvability propagation process, significantly reduces the total number of markers needed in a typical motion capture session. The framework can also operate in a hybrid scheme, automatically switching between the primary joint estimator and a slower solution that imposes fewer conditions on the number of markers required on a given body segment. This makes the framework workable even in extreme scenarios where fewer than three markers are placed on any body segment. A non-linear optimisation method for 3D trajectory smoothing is also proposed for de-noising the estimated joint paths. By immobilising a series of characteristic points in the trajectory, this method effectively preserves detailed information in vigorous motion sequences. Various other smoothing techniques from the literature are discussed and compared, leading to the conclusion that a size-3 weighted average filter applied automatically is a good real-time solution for low-intensity activities. The effects of skin deformation on marker position data, known as soft tissue artefacts, are studied via a behavioural experiment on the human upper body, with specific emphasis on combined limb actions.
Based on the experimental findings, mathematical models are proposed to characterise the development of the different types of artefact: translational, rotational, and transverse. We also theoretically demonstrate the feasibility of using a Kalman filter, driven by these models, to correct the soft tissue artefacts.
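The size-3 weighted average filter recommended above for real-time smoothing of low-intensity motion can be sketched in a few lines; the symmetric weights below are an illustrative choice, not values taken from the thesis.

```python
import numpy as np

def smooth_trajectory(points, w=(0.25, 0.5, 0.25)):
    """Apply a size-3 weighted moving average to a 3D trajectory.

    points : (N, 3) array of joint positions, one row per frame.
    w      : the three filter weights (should sum to 1); the centre
             weight controls how much detail is preserved.
    Endpoints are left untouched so the trajectory keeps its length.
    """
    points = np.asarray(points, dtype=float)
    out = points.copy()
    out[1:-1] = w[0] * points[:-2] + w[1] * points[1:-1] + w[2] * points[2:]
    return out
```

Because each output frame depends only on its immediate neighbours, the filter introduces a fixed one-frame latency, which is what makes it usable in real time.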
2

Assessing the Risk of Credit Guaranteed Loans to SMEs: Based on the Probability of Default and Recovery Rate Calculated by a Joint Parameters Estimation Approach

Lai, Kuang-erh 18 January 2010 (has links)
In almost all nations, credit guarantees are an important mechanism by which governments help small and medium enterprises (SMEs) obtain financing and guidance. In Taiwan, the Small and Medium Enterprise Credit Guarantee Fund (SMEG) is the institution mandated by the government to help SMEs obtain necessary funds from financial institutions. Although SMEG is a non-profit organisation, its financial status still affects its sustainability. This paper therefore modifies the model presented by Merrick (2001) and uses data on loans submitted by a domestic bank to SMEG for credit guarantees to estimate the probability of default and the recovery rate of credit-guaranteed loans. Because the model quantifies the risk of a credit guarantee, it can help SMEG calculate the reserve needed for prepayment in subrogation. In an increasingly complicated financial environment, the quality of risk control determines the prosperity, or survival, of an organisation. The proposed model is a feasible risk evaluation tool that credit guarantee institutions can use to effectively improve their risk control.
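As a rough illustration of how an estimated probability of default and recovery rate feed into a reserve calculation, one might compute an expected payout as exposure × PD × (1 − recovery rate) × guarantee coverage. This sketch, including the `coverage` parameter, is an assumption for illustration and not the Merrick-style model the paper actually modifies.

```python
def guarantee_reserve(exposure, pd_, recovery_rate, coverage=0.8):
    """Illustrative expected-loss reserve for a credit-guaranteed loan.

    exposure      : guaranteed loan amount
    pd_           : probability of default (e.g. from joint estimation)
    recovery_rate : expected fraction recovered after subrogation
    coverage      : assumed fraction of the loss the guarantee fund covers
    Returns the expected payout the fund should reserve against.
    """
    return exposure * pd_ * (1.0 - recovery_rate) * coverage
```

For example, a fund covering 80% of a 1,000,000 loan with a 5% default probability and 40% expected recovery would reserve 24,000 under this simplified view.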
3

Essays on Artefactual and Virtual Field Experiments in Choice Under Uncertainty

Tsang, Ming 01 December 2016 (has links)
In transportation policy, congestion pricing has been used to alleviate traffic congestion in metropolitan areas. Chapter 1 examines drivers' perceived risk of traffic delay as one determinant of reactions to congestion pricing. The experiment reported in this essay recruits commuters from the Atlanta and Orlando metropolitan areas to make repeated route decisions in a driving simulator. Chapter 1 examines belief formation and adjustment under an endogenous information environment, where information about a route can be obtained only by taking that route. If subjects arrive at the destination late, i.e. beyond an assigned time threshold, they face a discrete (flat) penalty. In contrast, Chapter 2 examines subjective beliefs in a setting where the penalty for a late arrival is continuous, such that a longer delay incurs a larger penalty. The primary research question is: does belief formation differ when the late penalty is induced as a continuous amount rather than a discrete one? In particular, does learning differ across the range of congestion probabilities under the two penalty settings? In the continuous penalty setting, we observe no difference in learning across the range of congestion probabilities; in the discrete penalty setting, we observe significant belief adjustments in the lowest congestion risk scenario. In Chapter 3 the "source method" is used to examine how uncertainty aversion differs across events that have the same underlying objective probabilities but are presented under varying degrees of uncertainty. Subjects are presented with three lottery tasks of increasing uncertainty.
Given the choices observed in each task, a source function is estimated jointly with risk attitudes under different probability weighting specifications of the source function. Results from the Prelec probability weighting suggest that subjects display increased pessimism as the degree of uncertainty increases; in contrast, the Tversky-Kahneman (1992) and Power probability weightings detect no such difference. Conclusions regarding uncertainty aversion are thus contingent on which probability weighting specification is assumed for the source function.
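The probability weighting specifications compared in Chapter 3 are standard one-parameter families; a minimal sketch (the parameter names are ours, not the thesis's notation):

```python
import math

def prelec(p, alpha):
    """Prelec (1998) one-parameter probability weighting:
    w(p) = exp(-(-ln p)^alpha).  alpha = 1 gives the identity;
    alpha < 1 gives the usual inverse-S shape."""
    return math.exp(-((-math.log(p)) ** alpha))

def tversky_kahneman(p, gamma):
    """Tversky-Kahneman (1992) probability weighting:
    w(p) = p^gamma / (p^gamma + (1-p)^gamma)^(1/gamma)."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)
```

Both families collapse to the identity at parameter value 1, and the Prelec form has a fixed point at p = 1/e for every alpha, which is one reason the two specifications can disagree about pessimism away from that point.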
4

Ensemble Kalman filter: an analysis of the joint estimation of states and parameters

Silva, Rafael Oliveira 08 April 2019 (has links)
The Ensemble Kalman Filter (EnKF) is a sequential Monte Carlo algorithm for inference in linear and nonlinear state-space models. Combined with some complementary methods, this filter propagates the joint posterior distribution of states and parameters over time. Few papers consider the problem of simultaneous state and parameter estimation, and the existing methods have limitations. This dissertation analyses the efficiency of these methods by means of simulation studies on linear and nonlinear state-space models. The nonlinear estimation problem addressed here is the logistic surplus-production model, for which the EnKF can be considered a possible alternative to MCMC algorithms. The simulation results show that the accuracy of the estimates increases as the time series grows, but some parameters remain difficult to estimate.
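Joint state and parameter estimation with the EnKF is commonly done by augmenting the state vector with the parameters, so the analysis step updates both at once. The toy sketch below does this for a scalar linear model with one unknown coefficient; it illustrates the general idea only, not the dissertation's setup (which targets the logistic surplus-production model), and the noise variances are assumptions.

```python
import numpy as np

def enkf_augmented(y, n_ens=500, q=0.1, r=0.5, seed=0):
    """Minimal EnKF with state augmentation: each ensemble member
    carries (x, a) for the model x_t = a * x_{t-1} + N(0, q),
    observed as y_t = x_t + N(0, r).  The parameter a moves only
    through the analysis update (no artificial parameter noise).
    Returns the ensemble means of x and a at the final time step."""
    rng = np.random.default_rng(seed)
    ens = np.column_stack([rng.normal(0, 1, n_ens),        # state x
                           rng.uniform(0.0, 1.0, n_ens)])  # parameter a
    for obs in y:
        # forecast: propagate each member's state with its own parameter
        ens[:, 0] = ens[:, 1] * ens[:, 0] + rng.normal(0, np.sqrt(q), n_ens)
        # analysis: Kalman update of the augmented vector via the x-observation
        anoms = ens - ens.mean(axis=0)
        cov_xy = (anoms * anoms[:, :1]).sum(axis=0) / (n_ens - 1)  # [var_x, cov_xa]
        gain = cov_xy / (cov_xy[0] + r)
        perturbed = obs + rng.normal(0, np.sqrt(r), n_ens)  # perturbed observations
        ens += np.outer(perturbed - ens[:, 0], gain)
        ens[:, 1] = np.clip(ens[:, 1], 0.0, 1.0)  # keep a in a plausible range
    return ens[:, 0].mean(), ens[:, 1].mean()
```

A known weakness visible even in this sketch is parameter ensemble collapse over long series, which is one of the limitations of the simple augmented approach that the dissertation examines.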
5

Joint Gaussian Graphical Model for multi-class and multi-level data

Shan, Liang 01 July 2016 (has links)
The Gaussian graphical model is a popular tool for investigating conditional dependency between random variables by estimating sparse precision matrices, and the estimated precision matrices can be mapped into networks for visualisation. For related but different classes, jointly estimating networks by taking advantage of structure common across classes can yield better estimates of the conditional dependencies among variables. Furthermore, there may exist a multilevel structure among variables: some variables are considered higher-level variables, and others, called lower-level variables, are nested within them. This dissertation makes two contributions to the joint estimation of Gaussian graphical models across heterogeneous classes: the first proposes a joint estimation method for Gaussian graphical models across unbalanced multi-class data, whereas the second incorporates multilevel variable information into the joint estimation procedure and simultaneously estimates the higher-level and lower-level networks. The first project considers the problem of jointly estimating Gaussian graphical models across unbalanced classes. Most existing methods require equal or similar sample sizes among classes, which many real applications do not have. We therefore propose the joint adaptive graphical lasso, a weighted L1-penalised approach for unbalanced multi-class problems. It combines information across classes so that their common characteristics are shared during estimation, and introduces regularisation into the adaptive term so that the unbalancedness of the data is taken into account. Simulation studies show that our approach outperforms existing methods in terms of false positive rate, accuracy, Matthews correlation coefficient, and false discovery rate.
We demonstrate the advantage of our approach using a liver cancer data set. The second project proposes a method to jointly estimate multilevel Gaussian graphical models across multiple classes. Current methods can investigate only a single-level conditional dependency structure even when a multilevel structure exists among the variables. Because higher-level variables may work together to accomplish certain tasks, simultaneously exploring the conditional dependency structures among higher-level variables and among lower-level variables is our main interest. Given multilevel data from heterogeneous classes, our method ensures that common structures in the multilevel conditional dependency are shared during estimation, while structures unique to each class are retained. This is achieved by first introducing a higher-level variable factor within each class, and then common factors across classes. The performance of our approach is evaluated on several simulated networks, and we demonstrate its advantage using breast cancer patient data. / Ph. D.
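The step of mapping an estimated precision matrix into a network can be sketched directly: variables are connected exactly when their precision entry is nonzero, i.e. when they are conditionally dependent given all the others. The tolerance and toy matrix below are ours, for illustration only.

```python
import numpy as np

def precision_to_network(theta, tol=1e-8):
    """Map an estimated precision matrix to an undirected network:
    variables i and j share an edge iff theta[i, j] is (numerically)
    nonzero, i.e. they are conditionally dependent given the rest."""
    theta = np.asarray(theta)
    adj = np.abs(theta) > tol
    np.fill_diagonal(adj, False)  # no self-loops
    return adj

# Toy precision matrix for a chain 0 - 1 - 2:
# variables 0 and 2 are conditionally independent given variable 1.
theta = np.array([[ 2.0, -0.8,  0.0],
                  [-0.8,  2.0, -0.8],
                  [ 0.0, -0.8,  2.0]])
```

This is why sparsity in the precision matrix, which the joint adaptive graphical lasso encourages, translates directly into a sparse, interpretable network.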
6

Blind estimation of simple and double hidden Markov chains: application to barcode decoding

Dridi, Noura 25 June 2012 (has links)
Since their inception, barcodes have been an automatic identification tool widely exploited in industry. Reading performance, however, is limited by optical blur and motion blur. The goal of this thesis is to optimise the reading of 1D and 2D barcodes using simple and double hidden Markov models together with blind estimation methods. First, the barcode reading system is modelled as a hidden Markov chain, and new algorithms for joint channel (blur) estimation and symbol detection are developed that take into account the non-stationarity of the Markov chain. A method for estimating the size and shape of the blur is also proposed, using model selection criteria to choose the most appropriate degradation model. Finally, we address the complexity problem, which is particularly acute for a channel with long memory, where classical maximum-likelihood detection is prohibitively expensive. The proposed solution models the long-memory channel as a double Markov chain; on the basis of this model, algorithms offering an optimised performance-complexity trade-off are presented.
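Symbol detection in a hidden Markov chain is typically performed with the Viterbi algorithm; below is a minimal log-domain sketch with a fixed transition matrix (the thesis handles the non-stationary case, where the transitions vary per step).

```python
import numpy as np

def viterbi(log_trans, log_emit, log_init):
    """Most probable state sequence for a hidden Markov chain.

    log_trans : (S, S) log transition probabilities (held fixed here
                for brevity; they vary per step in the non-stationary case)
    log_emit  : (T, S) log-likelihood of each observation under each state
    log_init  : (S,) log initial state distribution
    Returns the maximum a posteriori state sequence of length T.
    """
    T, S = log_emit.shape
    delta = log_init + log_emit[0]          # best log-score ending in each state
    back = np.zeros((T, S), dtype=int)      # best predecessor pointers
    for t in range(1, T):
        scores = delta[:, None] + log_trans  # (from, to)
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_emit[t]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):            # backtrack
        path.append(int(back[t][path[-1]]))
    return path[::-1]
```

Its cost is O(T·S²), which is exactly what becomes prohibitive when a long-memory channel inflates the state space, motivating the double Markov chain model.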
7

Digital signal processing methods for self-interference cancellation in a mobile terminal

Gerzaguet, Robin 26 March 2015 (has links)
Current transceivers tend to be multi-standard, meaning that several communication standards can cohabit on the same chip. The chips must therefore process signals of very different forms, and the analog components face increasingly strong design constraints to support the various standards. Self-interferences, i.e. interferences generated by the system itself, are consequently more and more present, and more and more problematic, in current architectures. This work follows the "dirty RF" paradigm: we accept partial pollution of the signal of interest and mitigate the impact of these self-generated pollutions with digital signal processing algorithms. We study baseband models for several self-interferences (spurs, Tx leakage, etc.) and propose compensation strategies for each. The proposed algorithms are adaptive signal processing algorithms that can be seen as noise-subtraction algorithms driven by references of varying accuracy. We analytically derive their theoretical transient and asymptotic performance, and we add an original adaptive step-size overlay that accelerates convergence while keeping the asymptotic performance predictable and configurable. Finally, we validate our approach on a chip dedicated to cellular communications and on a software-defined radio platform.
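Reference-driven noise subtraction of the kind described above can be illustrated with a basic LMS canceller; this is a generic textbook sketch, not the thesis's algorithm, and in particular it lacks the adaptive step-size overlay.

```python
import numpy as np

def lms_cancel(observed, reference, mu=0.05, n_taps=4):
    """Adaptive noise subtraction with LMS: the filter learns to map a
    reference of the self-interference (e.g. a known spur or the Tx
    baseband signal) onto its contribution in the received samples,
    then subtracts that estimate.  Returns the cleaned signal."""
    w = np.zeros(n_taps)                 # adaptive filter weights
    buf = np.zeros(n_taps)               # delay line of reference samples
    cleaned = np.empty(len(observed), dtype=float)
    for n, (d, x) in enumerate(zip(observed, reference)):
        buf = np.roll(buf, 1)
        buf[0] = x
        e = d - w @ buf                  # error = observation - estimated pollution
        cleaned[n] = e
        w += mu * e * buf                # LMS weight update
    return cleaned
```

The step size `mu` trades convergence speed against steady-state error, which is precisely the tension the thesis's step-size overlay is designed to relax.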
