81

A Comparative Study of the Particle Filter and the Ensemble Kalman Filter

Datta Gupta, Syamantak January 2009 (has links)
Non-linear Bayesian estimation, i.e., estimation of the state of a non-linear stochastic system from a set of indirect noisy measurements, is a problem encountered in several fields of science. The particle filter and the ensemble Kalman filter are both used to obtain sub-optimal solutions of Bayesian inference problems, particularly for high-dimensional non-Gaussian and non-linear models. Both are essentially Monte Carlo techniques that compute their results using a set of estimated trajectories of the variable to be monitored. It has been shown that in a linear and Gaussian environment, solutions obtained from both these filters converge to the optimal solution obtained by the Kalman filter. However, given the similarity between the two filters, it is of interest to explore how they compare to each other in basic methodology and construction. In this work, we take up a specific problem of Bayesian inference in a restricted framework and compare analytically the results obtained from the particle filter and the ensemble Kalman filter. We show that for the chosen model, under certain assumptions, the two filters become methodologically analogous as the sample size goes to infinity.
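The linear-Gaussian setting mentioned in the abstract, in which both filters approach the Kalman filter, can be sketched as follows. This is a minimal illustration, not the thesis's actual experiment: the scalar model coefficients, ensemble size, and horizon are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Scalar linear-Gaussian model: x_t = a*x_{t-1} + w_t, y_t = x_t + v_t.
a, q, r = 0.9, 0.5, 1.0          # transition coeff, process std, obs std
T, N = 200, 500                  # time steps, particle/ensemble size

# Simulate a ground-truth trajectory and noisy observations.
x = np.zeros(T)
for t in range(1, T):
    x[t] = a * x[t - 1] + q * rng.normal()
y = x + r * rng.normal(size=T)

def particle_filter(y):
    """Bootstrap particle filter: propagate, weight, resample."""
    particles = rng.normal(size=N)
    est = np.zeros(T)
    for t in range(T):
        particles = a * particles + q * rng.normal(size=N)
        w = np.exp(-0.5 * ((y[t] - particles) / r) ** 2)
        w /= w.sum()
        est[t] = np.sum(w * particles)
        particles = rng.choice(particles, size=N, p=w)  # multinomial resampling
    return est

def ensemble_kalman_filter(y):
    """Stochastic EnKF with perturbed observations."""
    ens = rng.normal(size=N)
    est = np.zeros(T)
    for t in range(T):
        ens = a * ens + q * rng.normal(size=N)
        P = np.var(ens)              # ensemble forecast variance
        K = P / (P + r ** 2)         # Kalman gain
        ens = ens + K * (y[t] + r * rng.normal(size=N) - ens)
        est[t] = ens.mean()
    return est

def rmse(est):
    return np.sqrt(np.mean((est - x) ** 2))

pf_rmse = rmse(particle_filter(y))
enkf_rmse = rmse(ensemble_kalman_filter(y))
obs_rmse = rmse(y)
print(pf_rmse, enkf_rmse, obs_rmse)
```

With a large sample size the two estimates track each other closely, consistent with the convergence result the abstract cites.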
82

Novel Bayesian multiscale methods for image denoising using alpha-stable distributions

Achim, Alin 19 January 2009 (has links)
Before launching into ultrasound research, it is important to recall that the ultimate goal is to provide the clinician with the best possible information needed to make an accurate diagnosis. Ultrasound images are inherently affected by speckle noise, which is due to image formation under coherent waves. Thus, it appears sensible to reduce speckle artifacts before performing image analysis, provided that the image texture that might distinguish one tissue from another is preserved. The main goal of this thesis was the development of novel speckle suppression methods for medical ultrasound images in the multiscale wavelet domain. We started by showing, through extensive modeling, that the subband decompositions of ultrasound images have significantly non-Gaussian statistics that are best described by families of heavy-tailed distributions such as the alpha-stable. We then developed Bayesian estimators that exploit these statistics. We used the alpha-stable model to design both minimum absolute error (MAE) and maximum a posteriori (MAP) estimators for alpha-stable signal mixed in Gaussian noise. The resulting noise-removal processors perform non-linear operations on the data, and we relate this non-linearity to the degree of non-Gaussianity of the data. We compared our techniques with classical speckle filters and current state-of-the-art soft and hard thresholding methods applied to actual ultrasound medical images, and we quantified the achieved performance improvement. Finally, we showed that the proposed processors can find application in other areas of interest as well, choosing as an illustrative example the case of synthetic aperture radar (SAR) images.
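The flavour of wavelet-domain Bayesian shrinkage described above can be sketched with a toy example. The thesis's estimators for general alpha-stable priors require numerical procedures, so this sketch uses the Cauchy law (the alpha = 1 stable case, which has a closed-form density) as the heavy-tailed prior, a single-level Haar transform, and a brute-force grid search for the MAP estimate; all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Piecewise-constant test signal (even length) with additive Gaussian noise.
n, sigma = 256, 0.5
clean = np.where(np.arange(n) < n // 2, 0.0, 2.0)
noisy = clean + sigma * rng.normal(size=n)

def haar(x):
    """One-level Haar analysis: approximation and detail coefficients."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def ihaar(a, d):
    """Inverse one-level Haar transform."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def cauchy_map(d, sigma, gamma=0.2):
    """MAP shrinkage of each detail coefficient under a Cauchy prior
    (alpha = 1 stable), maximising the log posterior over a grid."""
    grid = np.linspace(-5, 5, 2001)
    loglik = -(d[:, None] - grid[None, :]) ** 2 / (2 * sigma ** 2)
    logprior = -np.log1p((grid[None, :] / gamma) ** 2)
    return grid[np.argmax(loglik + logprior, axis=1)]

a, d = haar(noisy)
denoised = ihaar(a, cauchy_map(d, sigma))

def mse(z):
    return np.mean((z - clean) ** 2)

print(mse(noisy), mse(denoised))
```

Because the noise is orthogonally mapped into the detail band while the true details are sparse, shrinking them toward zero lowers the reconstruction error, which is the mechanism the Bayesian processors above exploit.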
83

A Bayesian solution for the Law of Categorical Judgment with category boundary variability and examination of robustness to model violations

King, David R. 12 January 2015 (has links)
Previous solutions for the Law of Categorical Judgment with category boundary variability have either constrained the standard deviations of the category boundaries in some way or have violated the assumptions of the scaling model. In the current work, a fully Bayesian Markov chain Monte Carlo solution for the Law of Categorical Judgment is given that estimates all model parameters (i.e., scale values, category boundaries, and the associated standard deviations). The importance of measuring category boundary standard deviations is discussed in the context of previous research in signal detection theory, which gives evidence of interindividual variability in how respondents perceive category boundaries and even intraindividual variability in how a respondent perceives category boundaries across trials. Although the measurement of category boundary standard deviations appears to be important for describing the way respondents perceive category boundaries on the latent scale, the inclusion of category boundary standard deviations in the scaling model exposes an inconsistency between the model and the rating method. Namely, with category boundary variability, the scaling model suggests that a respondent could experience disordinal category boundaries on a given trial. However, the idea that a respondent actually experiences disordinal category boundaries seems unlikely. The discrepancy between the assumptions of the scaling model and the way responses are made at the individual level indicates that the assumptions of the model will likely not be met. Therefore, the current work examined how well model parameters could be estimated when the assumptions of the model were violated in various ways as a consequence of disordinal category boundary perceptions.
A parameter recovery study examined the effect of model violations on estimation accuracy by comparing estimates obtained from three response processes that violated the assumptions of the model with estimates obtained from a novel response process that did not violate the assumptions of the model. Results suggest all parameters in the Law of Categorical Judgment can be estimated reasonably well when these particular model violations occur, albeit to a lesser degree of accuracy than when the assumptions of the model are met.
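The disordinal-boundary issue discussed above is easy to make concrete by simulation: if boundary positions are drawn independently on each trial, some fraction of trials will produce out-of-order boundaries. The boundary means and standard deviation below are illustrative values, not estimates from the thesis.

```python
import numpy as np

rng = np.random.default_rng(2)

# Mean locations and common standard deviation of three category
# boundaries on the latent scale (illustrative assumptions).
b_mean = np.array([-1.0, 0.0, 1.0])
b_sd = 0.6
trials = 100_000

# Draw boundary positions independently on each trial, as a scaling
# model with boundary variability implies, and count trials on which
# the drawn boundaries come out in the wrong order (disordinal).
draws = b_mean + b_sd * rng.normal(size=(trials, 3))
disordinal = np.mean(np.any(np.diff(draws, axis=1) < 0, axis=1))
print(disordinal)
```

Even with modest boundary variability, a non-trivial share of simulated trials is disordinal, which is exactly the model-versus-response-process inconsistency the parameter recovery study probes.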
84

Sticky information and non-pricing policies in DSGE models

Molinari, Benedetto 19 September 2008 (has links)
This thesis is organized in two parts. In the first, I seek to understand the relationship between frictions in firms' information flows and inflation persistence. To this end, I present a novel estimator for the Sticky Information Phillips Curve (Mankiw and Reis, 2002), and I use it to estimate this model with U.S. postwar data. The main result is that the Sticky Information Phillips Curve can match inflation persistence only at the cost of mispredicting inflation variance. I conclude that the Sticky Information Phillips Curve is a valid model of inflation persistence but not an overall valid theory of inflation. The second part presents new evidence on aggregate advertising expenditures in the U.S. and analyzes the effect of advertising on the aggregate economy by means of a dynamic stochastic general equilibrium model. Chapter 2 focuses on the short-run impact of advertising on aggregate dynamics, and shows that an increase in aggregate advertising significantly increases aggregate consumption. Chapter 3 focuses on the long-run effects of advertising on labor supply, showing that in economies where aggregate advertising is higher, agents supply more hours of work and are generally worse off in terms of welfare.
85

Essays on Money, Credit and Fiscal Policy

Sessa, Luca 27 July 2011 (has links)
This thesis tackles three different issues of relevance for economic policy, with explicit reference to the Euro area. Does the inclusion of monetary targeting in a monetary policy strategy improve macroeconomic stability? What role does the banking sector play in the impulse and transmission of shocks? Which fiscal tools have the greatest and most persistent impact on the real economy, helping effective stabilization policy design? Answers to each question, derived from dynamic general equilibrium models matched to the data, carry noteworthy indications for policy-makers.
86

Reachable sets analysis in the cooperative control of pursuer vehicles.

Chung, Chern Ferng, Mechanical & Manufacturing Engineering, Faculty of Engineering, UNSW January 2008 (has links)
This thesis is concerned with the Pursuit-and-Evasion (PE) problem, in which the pursuer aims to minimize the time to capture the evader while the evader tries to prevent capture. In the problem, the evader has two advantages: higher manoeuvrability, and the pursuer's uncertainty about the evader's state. Cooperation among multiple pursuer vehicles can thus be used to overcome the evader's advantages. The focus here is on the formulation and development of frameworks and algorithms for cooperation amongst pursuers, aiming at feasible implementation on real autonomous vehicles. The thesis is split into Parts I and II. Part I considers the problem of capturing an evader of higher manoeuvrability in a deterministic PE game. The approach is the employment of Forward Reachable Set (FRS) analysis in the pursuers' control. The analysis considers the coverage of the evader's FRS, which is the set of reachable states at a future time, with the pursuer's FRS, and assumes that the chance of capturing the evader depends on the degree of coverage. Using the union of multiple pursuers' FRSs intuitively leads to more coverage of the evader's FRS, and this forms the mechanism of cooperation. A framework for cooperative control based on FRS coverage, or FRS-based control, is proposed, and two control algorithms were developed within this framework. Part II additionally introduces the problem of evader state uncertainty due to noise and the limited field-of-view of the pursuers' sensors. The result is a search-and-capture (SAC) problem, and a hybrid architecture, which includes multi-sensor estimation using the particle filter as well as FRS-based control, is proposed to accomplish the SAC task. The two control algorithms of Part I were tested in simulations against an optimal guidance algorithm; the results show that both algorithms yield better performance in terms of time and miss distance. The results in Part II demonstrate the effectiveness of the hybrid architecture for the SAC task. The proposed frameworks and algorithms provide insights for the development of effective and more efficient control of pursuer vehicles and can be useful in practical applications such as defence systems and civil law enforcement.
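The coverage mechanism behind FRS-based cooperation can be sketched with a Monte Carlo estimate. This is a hypothetical planar scenario with simple constant-speed agents whose FRSs are approximated by disks; positions, speeds, and horizon are made-up values, not the thesis's vehicle models.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical scenario: evader at the origin with speed v_e, pursuers
# at fixed positions with speed v_p, planning horizon T. Each agent's
# forward reachable set (FRS) is approximated by a disk of radius v*T.
v_e, v_p, T = 2.0, 1.5, 1.0
evader_radius = v_e * T
pursuer_radius = v_p * T
pursuers = np.array([[1.0, 0.0], [-0.5, 1.0]])

# Sample points uniformly over the evader's FRS disk.
m = 50_000
rad = evader_radius * np.sqrt(rng.uniform(size=m))
phi = rng.uniform(0, 2 * np.pi, size=m)
pts = np.column_stack([rad * np.cos(phi), rad * np.sin(phi)])

def coverage(positions):
    """Fraction of evader-FRS samples inside the union of pursuer FRSs."""
    d = np.linalg.norm(pts[:, None, :] - positions[None, :, :], axis=2)
    return np.mean((d <= pursuer_radius).any(axis=1))

single = coverage(pursuers[:1])
union = coverage(pursuers)
print(single, union)
```

The union of two pursuers' FRSs always covers at least as much of the evader's FRS as either alone, which is the intuition the cooperative framework builds on.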
87

Regressão binária usando ligações potência e reversa de potência / Binary regression using power and reversal power links

Chumbimune Anyosa, Susan Alicia 07 April 2017 (has links)
The aim of this dissertation is to study a family of asymmetric link functions for binary regression models under the Bayesian approach. Specifically, we present the estimation of the parameters of power and reversal power binary regression models using the Hamiltonian Monte Carlo method, in its No-U-Turn Sampler extension, and the Metropolis-Hastings within Gibbs sampling method. Furthermore, we study a wide variety of model comparison measures, including information criteria and measures of predictive evaluation. A simulation study was conducted to assess the accuracy and efficiency of the estimated parameters. Through an analysis of educational data, we show that models using the proposed link functions provide a better fit than models using standard links.
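The power and reversal power links can be written down directly. The sketch below assumes the common construction built on the logistic CDF F, namely F(eta)**alpha and 1 - (1 - F(eta))**alpha; it is an illustration of the link family's shape, not the dissertation's estimation code.

```python
import numpy as np

def logistic_cdf(eta):
    return 1.0 / (1.0 + np.exp(-eta))

def power_link(eta, alpha):
    """Power link: success probability F(eta)**alpha, alpha > 0."""
    return logistic_cdf(eta) ** alpha

def reverse_power_link(eta, alpha):
    """Reversal power link: 1 - (1 - F(eta))**alpha."""
    return 1.0 - (1.0 - logistic_cdf(eta)) ** alpha

eta = np.linspace(-4, 4, 201)
for alpha in (0.5, 1.0, 2.0):
    p = power_link(eta, alpha)
    q = reverse_power_link(eta, alpha)
    # By logistic symmetry F(-eta) = 1 - F(eta), the two families are
    # mirror images: 1 - (1 - F(eta))**a = 1 - F(-eta)**a.
    assert np.allclose(q, 1.0 - power_link(-eta, alpha))
    # Both links are monotone increasing in eta.
    assert np.all(np.diff(p) > 0) and np.all(np.diff(q) > 0)
```

With alpha = 1 both links collapse to the ordinary logit; alpha away from 1 skews the response curve, which is what makes the family useful for asymmetric binary data.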
88

Impacto da política fiscal sobre a taxa de câmbio : análise para o caso brasileiro através de um modelo DSGE com economia aberta / The impact of fiscal policy on the exchange rate: an analysis for Brazil using an open-economy DSGE model

Frank Junior, Oscar André January 2012 (has links)
The present work evaluates the impact of fiscal policy on open-economy variables, including the exchange rate. To this end, a DSGE model with an external sector is used for Brazil, based on Grith (2007). This approach has significant advantages over the existing literature, such as: (i) the presence of a fiscal authority; (ii) nominal rigidity of prices and wages; (iii) a Taylor Rule, consistent with an Inflation Targeting system; and (iv) the possibility of evaluating the impact of shocks generated in the foreign country (in this case, the United States) on the local economy. The results of the model, estimated with quarterly data from 2000 to 2011, suggest that among taxes on consumption, wages, and capital and government expenditures, the fiscal instrument with the largest effect on external-sector variables is the last one. Furthermore, it is monetary policy that causes the greatest effect in magnitude on the exchange rate.
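The Taylor Rule mentioned in item (iii) has a standard textbook form. The sketch below uses conventional illustrative coefficients (phi_pi = 1.5, phi_y = 0.5) and made-up rate levels, not the values estimated in this thesis.

```python
# Textbook Taylor rule: the nominal rate responds more than one-for-one
# to the inflation gap (the Taylor principle, phi_pi > 1) and to the
# output gap. All numbers are illustrative, in percent per annum.
def taylor_rule(pi, y_gap, r_star=4.0, pi_target=4.5, phi_pi=1.5, phi_y=0.5):
    return r_star + pi + phi_pi * (pi - pi_target) + phi_y * y_gap

base = taylor_rule(pi=4.5, y_gap=0.0)           # inflation on target
high_inflation = taylor_rule(pi=6.5, y_gap=0.0) # 2 pp above target
print(base, high_inflation)
```

A 2-point rise in inflation raises the prescribed rate by 2*(1 + phi_pi) = 5 points, so the real rate rises, which is what makes the rule consistent with an Inflation Targeting regime.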
89

Computing optimal and realised monetary policy rules for Brazil : a Markov-switching DSGE approach

Paranhos, Lívia Silva January 2017 (has links)
The evolution of the Brazilian economy during the first years of this century is examined through the lens of a micro-founded small open economy model that allows for changes in the behaviour of the Central Bank of Brazil, in nominal price rigidity, and in the volatility of structural shocks. Although the results are not conclusive about the presence of regime changes during the analysed sample, we find evidence of shifts in the monetary policy stance, moving from a Dove to a Hawk regime in 2003, as well as evidence of more volatile external shocks during periods of uncertainty. We then move beyond the empirical estimation and derive optimal monetary policy rules for Brazil. It is possible to find an optimal rule that succeeds in stabilizing inflation, output and the exchange rate whilst keeping the interest rate stable. Finally, the model offers interesting insights into the volatility dynamics of macroeconomic variables: a more stable currency implies a more volatile interest rate, and vice versa; and tighter control over interest rates and/or exchange rates seems to produce output and inflation instability.
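The regime-switching ingredient of a Markov-switching DSGE model is a hidden Markov chain over policy stances. The sketch below simulates a two-regime (Dove/Hawk) chain and checks its stationary distribution; the transition probabilities are illustrative assumptions, not the thesis's estimates.

```python
import numpy as np

rng = np.random.default_rng(4)

# Two-regime Markov chain for the policy stance (0 = Dove, 1 = Hawk),
# with persistent regimes. Probabilities are made-up illustrative values.
p_stay_dove, p_stay_hawk = 0.95, 0.98
P = np.array([[p_stay_dove, 1 - p_stay_dove],
              [1 - p_stay_hawk, p_stay_hawk]])

# For a 2-state chain the stationary distribution has a closed form.
pi_dove = (1 - p_stay_hawk) / ((1 - p_stay_dove) + (1 - p_stay_hawk))
stationary = np.array([pi_dove, 1 - pi_dove])

# Simulate a regime path, as draws in a Markov-switching estimation would.
T = 5_000
s = np.zeros(T, dtype=int)
for t in range(1, T):
    s[t] = rng.uniform() < P[s[t - 1], 1]  # move to / stay in Hawk w.p. P[s,1]

freq = np.bincount(s, minlength=2) / T
print(stationary, freq)
```

Conditional on the regime, the model's structural parameters (e.g. the Taylor-rule response coefficients) switch, which is how a 2003 Dove-to-Hawk shift would show up in estimation.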
90

O valor futuro de cada cliente : estimação do Customer Lifetime Value / The future value of each customer: estimating the Customer Lifetime Value

Silveira, Rodrigo Heldt January 2014 (has links)
The capacity of marketing to measure and communicate the value of its activities and investments has been one of the area's top research priorities in recent years. To achieve this, the capacity to appropriately measure marketing assets, such as the Customer Lifetime Value and, in aggregate form, the Customer Equity, is essential, because these assets are considered capable of translating the results of marketing investments into monetary values. Given the measurement of these values, marketers become able to plan and take more precise actions. Thus, the objective of the present study is to build and test a bottom-up Customer Lifetime Value estimation model on a sample of customers from a financial services company. The Bayesian hierarchical model, composed of three regressions structured according to the Seemingly Unrelated Regressions (SUR) model (Zellner, 1971), was built from the works of Kumar et al. (2008), Kumar and Shah (2009), and Cowles, Carlin and Connett (1996). The results show that (1) the model was able to estimate with consistency the future value of 84% of the analyzed customers; (2) the estimated future values indicate the potential profitability of each customer; (3) the customer base can be segmented by Customer Lifetime Value. Given the knowledge obtained about the future value of each customer and the segments established, several actions that can improve on the traditional way of managing customers were suggested, especially concerning marketing resource allocation.
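Whatever model produces the per-customer forecasts, the bottom-up CLV itself is a discounted sum of expected margins weighted by survival probabilities. The margins, retention rate, and discount rate below are made-up illustrative numbers, not outputs of the thesis's hierarchical SUR model.

```python
import numpy as np

def clv(margins, retention=0.9, annual_rate=0.12):
    """Bottom-up CLV for one customer: per-period profit forecasts,
    geometric retention (P(alive) = retention**t), and discounting."""
    margins = np.asarray(margins, dtype=float)
    t = np.arange(1, len(margins) + 1)
    alive = retention ** t                 # survival probability at t
    discount = (1 + annual_rate) ** -t     # present-value factor
    return float(np.sum(margins * alive * discount))

future_margins = [120.0, 130.0, 125.0, 140.0]  # forecast profit per period
value = clv(future_margins)
print(round(value, 2))
```

Segmenting the customer base then amounts to ranking or bucketing customers by this value, which is the basis for the resource-allocation actions the study suggests.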