11

Interactive Imaging via Hand Gesture Recognition.

Jia, Jia January 2009 (has links)
With the growth of computing power, digital image processing plays an increasingly important role in the modern world, in fields including industry, medicine, communications and spaceflight technology. As a sub-field, interactive image processing emphasizes the communication between machine and human. The basic pipeline is: definition of the object, an analysis and training phase, then recognition and feedback. Generally speaking, the core issue is how to define the object of interest and track it accurately in order to complete the interaction successfully. This thesis proposes a novel dynamic simulation scheme for interactive image processing. The work consists of two main parts: hand motion detection and hand gesture recognition. In hand motion detection, movement of the hand is identified and extracted. In a given detection period, the current image is compared with the previous image to generate the difference between them. If the difference exceeds a predefined threshold, a hand motion is detected. Furthermore, in some situations, changes of hand gesture also need to be detected and classified. This task requires feature extraction and feature comparison among the gesture types. The essential features of a hand gesture include low-level features such as color and shape. Another important feature is the orientation histogram: each type of hand gesture has a particular representation in the orientation histogram domain.
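The frame-differencing step described above can be sketched as follows; the function name, threshold values and synthetic frames are illustrative assumptions, not the thesis's actual implementation:

```python
import numpy as np

def detect_motion(prev_frame, curr_frame, threshold=25.0, min_changed_ratio=0.01):
    """Flag motion when enough pixels differ between consecutive frames.

    Frames are 2-D grayscale arrays with values in [0, 255].
    Returns (motion_detected, per-pixel change mask).
    """
    diff = np.abs(curr_frame.astype(np.float32) - prev_frame.astype(np.float32))
    changed = diff > threshold        # pixels whose change exceeds the threshold
    ratio = changed.mean()            # fraction of the image that changed
    return ratio > min_changed_ratio, changed

# Synthetic example: a bright "hand" patch appears in the second frame.
prev = np.zeros((120, 160), dtype=np.uint8)
curr = prev.copy()
curr[40:80, 60:100] = 200
moved, mask = detect_motion(prev, curr)
```

Tuning `threshold` trades sensitivity against noise robustness; real systems would also smooth or morphologically clean the mask.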
Because the Gaussian mixture model (GMM) represents an object well through its essential feature elements, and expectation-maximization (EM) is an efficient procedure for computing the likelihood of test images against a predefined standard sample of each gesture, the similarity between a test image and the samples of each gesture type is estimated by the EM algorithm in a GMM. Experiments show that the proposed method works well and accurately.
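As a rough illustration of the EM-for-GMM idea, here is a 1-D toy fit (not the orientation-histogram features used in the thesis; the function name and quantile initialization are assumptions):

```python
import numpy as np

def em_gmm_1d(x, k=2, iters=50):
    """Fit a 1-D Gaussian mixture by expectation-maximization (toy sketch)."""
    mu = np.quantile(x, np.linspace(0.1, 0.9, k))   # spread-out initial means
    var = np.full(k, x.var())                       # initial variances
    pi = np.full(k, 1.0 / k)                        # mixing weights
    for _ in range(iters):
        # E-step: responsibilities r[i, j] ∝ pi_j * N(x_i; mu_j, var_j)
        d = x[:, None] - mu[None, :]
        logp = np.log(pi) - 0.5 * (d**2 / var + np.log(2 * np.pi * var))
        logp -= logp.max(axis=1, keepdims=True)     # stabilize before exp
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, variances from soft counts
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu)**2).sum(axis=0) / nk
    return pi, mu, var

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-3.0, 1.0, 500), rng.normal(3.0, 1.0, 500)])
pi, mu, var = em_gmm_1d(x)
```

For gesture classification one would fit one such mixture per gesture class and assign a test image to the class with the highest likelihood.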
12

Hawkes Process Models for Unsupervised Learning on Uncertain Event Data

Haghdan, Maysam January 2017 (has links)
No description available.
13

Longitudinal data analysis with covariate measurement error

Hoque, Md. Erfanul 05 January 2017 (has links)
Longitudinal data occur frequently in medical studies, and covariates measured with error are a typical feature of such data. Generalized linear mixed models (GLMMs) are commonly used to analyse longitudinal data. It is typically assumed in these models that the random effects covariance matrix is constant across subjects. In many situations, however, this correlation structure may differ among subjects, and ignoring this heterogeneity can produce biased estimates of model parameters. In this thesis, following Lee et al. (2012), we propose an approach to properly model the random effects covariance matrix in terms of covariates in the class of GLMMs where covariates are also measured with error. The resulting parameters from this decomposition have a sensible interpretation and can easily be modelled without concern for the positive definiteness of the resulting estimator. The performance of the proposed approach is evaluated through simulation studies, which show that the proposed method performs very well in terms of biases and mean square errors as well as coverage rates. The proposed method is also illustrated using data from the Manitoba Follow-up Study. / February 2017
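A common device for parameterizing a covariance matrix through unconstrained quantities, in the spirit of the decomposition referenced above, is the modified Cholesky decomposition. The sketch below is a generic illustration with hypothetical names and values, not the thesis's exact formulation:

```python
import numpy as np

def covariance_from_params(phi, log_d):
    """Build a covariance matrix from unconstrained parameters via the
    modified Cholesky decomposition  T @ Sigma @ T.T = D.

    phi   : 'generalized autoregressive parameters' (the strictly
            below-diagonal entries of the unit lower-triangular T, negated)
    log_d : log 'innovation variances' (diagonal of D)
    Any real-valued inputs yield a symmetric positive-definite Sigma, so the
    parameters can be regressed on covariates without any constraint.
    """
    q = len(log_d)
    T = np.eye(q)
    T[np.tril_indices(q, -1)] = -np.asarray(phi)
    D = np.diag(np.exp(log_d))
    Tinv = np.linalg.inv(T)
    return Tinv @ D @ Tinv.T

# Arbitrary parameter values still produce a valid covariance matrix:
sigma = covariance_from_params(phi=[0.5, -0.2, 0.8], log_d=[0.0, -0.5, 0.3])
eigvals = np.linalg.eigvalsh(sigma)
```

Because `phi` and `log_d` are unconstrained, each can be written as a linear function of subject-level covariates, which is what lets the covariance differ across subjects.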
14

Algorithmic Trading : Hidden Markov Models on Foreign Exchange Data

Idvall, Patrik, Jonsson, Conny January 2008 (has links)
In this master's thesis, hidden Markov models (HMMs) are evaluated as a tool for forecasting movements in a currency cross. With an ever more electronic market making way for automated, so-called algorithmic trading, there is a constant need for new trading strategies that try to find alpha, the excess return, in the market. HMMs are based on the well-known theory of Markov chains, but with the states assumed hidden, governing some observable output. HMMs have mainly been used for speech recognition and communication systems, but have lately also been applied to financial time series with encouraging results. Both discrete and continuous versions of the model are tested, as well as single- and multivariate input data. In addition to the basic framework, two extensions are implemented in the belief that they will further improve the prediction capabilities of the HMM. The first is a Gaussian mixture model (GMM), in which each state is assigned a set of single Gaussians that are weighted together to replicate the density function of the stochastic process. This opens up the modeling of non-normal distributions, which are often assumed for foreign exchange data. The second is an exponentially weighted expectation-maximization (EWEM) algorithm, which takes time attenuation into consideration when re-estimating the parameters of the model. This keeps old trends in mind while giving more recent patterns greater attention. Empirical results show that the HMM using continuous emission probabilities can, for some model settings, generate acceptable returns with Sharpe ratios well over one, whilst the discrete version in general performs poorly. The GMM therefore seems to be a highly needed complement to the HMM. The EWEM, however, does not improve results as one might have expected.
Our general impression is that the HMM-based predictor we have developed and tested is too unstable to be adopted as a trading tool on foreign exchange data, with too many factors influencing the results. More research and development is called for.
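The forward-filtering recursion at the heart of such an HMM predictor can be sketched as follows; the two-regime parameters and the return series are invented for illustration, not taken from the thesis:

```python
import numpy as np

def hmm_filter(obs, A, means, stds, pi0):
    """Forward filtering for an HMM with Gaussian emissions.

    Returns the filtered state probabilities P(state_t | obs_1..t),
    which can be combined with A to forecast the next observation.
    """
    def emit(y):  # emission likelihood of observation y under each state
        return np.exp(-0.5 * ((y - means) / stds) ** 2) / (stds * np.sqrt(2 * np.pi))

    alpha = pi0 * emit(obs[0])
    alpha /= alpha.sum()
    for y in obs[1:]:
        alpha = (alpha @ A) * emit(y)   # predict with A, then update with y
        alpha /= alpha.sum()            # normalize to avoid underflow
    return alpha

# Two hidden regimes: "down-drift" and "up-drift" daily returns.
A = np.array([[0.9, 0.1], [0.1, 0.9]])
means, stds = np.array([-0.2, 0.2]), np.array([0.1, 0.1])
obs = np.array([0.25, 0.18, 0.22, 0.19])   # consistently positive returns
alpha = hmm_filter(obs, A, means, stds, np.array([0.5, 0.5]))
forecast_mean = (alpha @ A) @ means        # one-step-ahead expected return
```

The GMM extension would replace the single Gaussian per state in `emit` with a weighted sum of Gaussians, and EWEM would down-weight old observations when re-estimating `A`, `means` and `stds`.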
16

Multi-look polarimetric SAR image segmentation using mixture models

Horta, Michelle Matos 04 June 2009 (has links)
The main focus of this thesis is the application of mixture models to multi-look polarimetric SAR image segmentation. Within this context, the SEM algorithm, together with the method of moments, is applied to estimate the parameters of Wishart, Kp and G0p mixture models. Each of these distributions has specific parameters that allow fitting data with different degrees of homogeneity. The Wishart distribution is suitable for modeling homogeneous regions, such as crop fields, and is widely used in multi-look polarimetric SAR data analysis. The Kp and G0p distributions have a roughness parameter that allows them to describe both heterogeneous regions, such as vegetation and urban areas, and homogeneous regions. Besides mixture models of a single family of distributions, the use of a dictionary containing all three families is proposed and analyzed. The proposed SEM method, under the different models, is also compared on real L-band images with two widely known techniques from the literature, the k-means and EM algorithms. The SEM method with a G0p mixture model, combined with an outlier removal stage, provided the best classification results. The G0p distribution was the most flexible in fitting the different kinds of data. The Wishart distribution was robust to different initializations. The k-means algorithm with the Wishart distribution is robust for segmenting SAR images containing outliers, but it is not very flexible to the variability of heterogeneous regions. The mixture model over the dictionary of families improves the SEM method's log-likelihood, but gives results similar to those of the G0p mixture model. For all types of initialization and all numbers of clusters, the G0p distribution prevailed in the selection process among the dictionary's distributions.
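The S-step that distinguishes SEM from plain EM can be illustrated on a 1-D Gaussian mixture, a stand-in for the Wishart/Kp/G0p mixtures of the thesis; all names and settings below are assumptions:

```python
import numpy as np

def sem_gmm_1d(x, k=2, iters=100, seed=0):
    """Stochastic EM (SEM) for a 1-D Gaussian mixture: after computing
    responsibilities (E-step), each point is randomly ASSIGNED to one
    component (S-step), and parameters are re-estimated per class (M-step).
    """
    rng = np.random.default_rng(seed)
    mu = np.quantile(x, np.linspace(0.1, 0.9, k))
    var = np.full(k, x.var())
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        d = x[:, None] - mu[None, :]
        p = pi * np.exp(-0.5 * d**2 / var) / np.sqrt(2 * np.pi * var)
        r = p / p.sum(axis=1, keepdims=True)             # E-step
        z = np.array([rng.choice(k, p=ri) for ri in r])  # S-step: sample labels
        for j in range(k):                               # M-step, per class
            xj = x[z == j]
            if len(xj) < 2:
                continue                                 # keep old values if class is empty
            pi[j], mu[j], var[j] = len(xj) / len(x), xj.mean(), xj.var()
        pi /= pi.sum()
    return pi, mu, var

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-3.0, 1.0, 400), rng.normal(3.0, 1.0, 400)])
pi, mu, var = sem_gmm_1d(x)
```

The random assignment step helps SEM escape poor initializations, which is one reason it is preferred over plain EM for the heterogeneous SAR mixtures above; in the thesis the per-class moment estimates would come from the method of moments for each family.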
17

Estimation of the probability density function of parameters derived from acoustic emission source signals

Γρενζελιάς, Αναστάσιος 25 June 2009 (has links)
In this diploma thesis, the subject was the estimation of the probability density function of parameters derived from signals of acoustic emission sources. In the theoretical part, the topics of greatest interest were non-destructive testing and acoustic emission, together with their applications. The data processed fall into two categories: data that were provided ready-made and data obtained from laboratory measurements. The expectation-maximization algorithm, studied theoretically, was used to process the experimental data and to extract the parameters of each signal. Having obtained the parameters, the signals were classified into categories according to the theory of pattern recognition. An appendix at the end of the thesis presents the detailed results, followed by the bibliography used.
18

Mixture model analysis with rank-based samples

Hatefi, Armin January 2013 (has links)
Simple random sampling (SRS) is the most commonly used sampling design in data collection. In many applications (e.g., in fisheries and medical research), quantification of the variable of interest is either time-consuming or expensive, but ranking a number of sampling units, without actually measuring them, can be done relatively easily and at low cost. In these situations, one may use rank-based sampling (RBS) designs to obtain more representative samples from the underlying population and improve the efficiency of the statistical inference. In this thesis, we study the theory and application of finite mixture models (FMMs) under RBS designs. In Chapter 2, we study the problems of maximum likelihood (ML) estimation and classification in a general class of FMMs under different ranked set sampling (RSS) designs. In Chapter 3, deriving the Fisher information (FI) content of different RSS data structures, including complete and incomplete RSS data, we show that the FI contained in each variation of the RSS data about different features of FMMs is larger than the FI contained in their SRS counterparts. There are situations where it is difficult to rank all the sampling units in a set with high confidence. Forcing rankers to assign unique ranks to the units (as in RSS) can lead to substantial ranking error and consequently to poor statistical inference. We hence focus on the partially rank-ordered set (PROS) sampling design, which aims to reduce the ranking error and the burden on rankers by allowing them to declare ties (partially ordered subsets) among the sampling units. Studying the information and uncertainty structures of PROS data in a general class of distributions, in Chapter 4 we show the superiority of the PROS design over the RSS and SRS schemes in data analysis. In Chapter 5, we also investigate the ML estimation and classification problems of FMMs under the PROS design.
Finally, we apply our results to estimate the age structure of a short-lived fish species based on the length frequency data, using SRS, RSS and PROS designs.
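A balanced ranked set sample with perfect ranking can be simulated as follows; this is a minimal sketch, and the helper names and the normal population are assumptions, not the thesis's fish-length data:

```python
import numpy as np

def ranked_set_sample(population_draw, set_size, cycles, rng):
    """Draw a balanced ranked set sample (RSS).

    For each rank r = 1..set_size and each cycle: draw `set_size` units,
    rank them (here by their true values, i.e. perfect ranking), and
    measure only the unit holding rank r. Only one unit per set is measured.
    """
    sample = []
    for _ in range(cycles):
        for r in range(set_size):
            units = population_draw(set_size, rng)
            sample.append(np.sort(units)[r])   # measure the r-th ranked unit
    return np.array(sample)

rng = np.random.default_rng(0)
draw = lambda n, rng: rng.normal(0.0, 1.0, n)
rss = ranked_set_sample(draw, set_size=3, cycles=200, rng=rng)
srs = draw(len(rss), rng)
# The RSS mean is unbiased, and for the same number of measured units its
# variance is typically smaller than that of the SRS mean.
```

PROS sampling would relax the `np.sort` step, requiring the ranker only to place units into ordered subsets rather than assign unique ranks.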
20

Topics in Network Utility Maximization : Interior Point and Finite-step Methods

Akhil, P T January 2017 (has links) (PDF)
Network utility maximization has emerged as a powerful tool for studying flow control, resource allocation and other cross-layer optimization problems. In this work, we study a flow control problem in the optimization framework. The objective is to maximize the sum utility of the users subject to the flow constraints of the network. The utility maximization is solved in a distributed setting: the network operator does not know the user utility functions, and the users know neither the rate choices of other users nor the flow constraints of the network. We build upon a popular decomposition technique proposed by Kelly [Eur. Trans. Telecommun., 8(1), 1997] to solve the utility maximization problem in this distributed setting. The technique decomposes the utility maximization problem into a user problem, solved by each user, and a network problem, solved by the network. We propose an iterative algorithm based on this decomposition. In each iteration, the users communicate to the network their willingness to pay for the network resources, and the network allocates rates in a proportionally fair manner based on the prices communicated by the users. The new feature of the proposed algorithm is that the rates allocated by the network remain feasible at all times. We show that the iterates put out by the algorithm asymptotically track a differential inclusion, and that the solution to the differential inclusion converges to the system optimal point via Lyapunov theory. As a benchmark we use a popular algorithm due to Kelly et al. [J. of the Oper. Res. Soc., 49(3), 1998] that involves fast user updates coupled with slow network updates in the form of additive increase and multiplicative decrease of the user flows. The proposed algorithm may be viewed as one with fast user updates and fast network updates that keeps the iterates feasible at all times.
Simulations suggest that our proposed algorithm converges faster than the aforementioned benchmark algorithm. When the flows originate or terminate at a single node, the network problem is the maximization of a so-called d-separable objective function over the bases of a polymatroid. The solution is the lexicographically optimal base of the polymatroid. We map the problem of finding the lexicographically optimal base of a polymatroid to the geometrical problem of finding the concave cover of a set of points on a two-dimensional plane. We also describe an algorithm that finds the concave cover in linear time. Next, we consider the minimization of a more general objective function, i.e., a separable convex function, over the bases of a polymatroid with a special structure. We propose a novel decomposition algorithm and show the proof of correctness and optimality of the algorithm via the theory of polymatroids. Further, motivated by the need to handle piece-wise linear concave utility functions, we extend the decomposition algorithm to handle the case when the separable convex functions are not continuously differentiable or not strictly convex. We then provide a proof of its correctness and optimality.
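Kelly-style proportionally fair allocation can be illustrated on a single shared link, a deliberate simplification of the multi-constraint network problem studied above; the price-update scheme and step size here are assumptions:

```python
import numpy as np

def kelly_allocation(w, capacity, steps=5000, lr=0.01):
    """Proportionally fair rates on one shared link:
    maximize sum_i w_i * log(x_i)  subject to  sum_i x_i <= capacity.

    Dual decomposition: each user i bids its willingness to pay w_i and,
    at link price p, chooses the rate x_i = w_i / p that solves its own
    problem; the link raises p when demand exceeds capacity and lowers
    it otherwise (gradient ascent on the dual).
    """
    price = 1.0
    for _ in range(steps):
        x = w / price                                  # user-optimal rates
        price = max(1e-9, price + lr * (x.sum() - capacity))  # price update
    return w / price

w = np.array([1.0, 2.0, 1.0])      # willingness to pay of three users
x = kelly_allocation(w, capacity=8.0)
# At the optimum the rates are proportional to the weights: x ≈ [2, 4, 2].
```

Note that, unlike the feasible-at-all-times algorithm proposed in the thesis, this plain dual scheme can allocate infeasible rates (sum exceeding capacity) during the transient.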
