101
Time-varying Phononic Crystals. Wright, Derek. 02 September 2010.
The primary objective of this thesis was to gain a deeper understanding of acoustic wave propagation in phononic crystals, particularly those that include materials whose properties can be varied periodically in time. This research was accomplished in three ways.
First, a 2D phononic crystal was designed, created, and characterized. Its properties closely matched those determined through simulation. The crystal demonstrated band gaps, dispersion, and negative refraction. It served as a means of elucidating the practicalities of phononic crystal design and construction and as a physical verification of their more interesting properties.
Next, the transmission matrix method for analyzing 1D phononic crystals was extended to include the effects of time-varying material parameters. The method was then used to provide a closed-form solution for the case of periodically time-varying material parameters. Some intriguing results from the use of the extended method include dramatically altered transmission properties and parametric amplification. New insights gained from the governing equations have helped to identify the conditions that lead to parametric amplification in these structures.
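For context, the sketch below shows the conventional static 1D transfer-matrix calculation that the extended method generalizes: transmission of a normally incident plane wave through a periodic stack of layers. The materials, thicknesses, and frequency range are illustrative assumptions, not values from the thesis, and the time-varying extension itself is not reproduced.

```python
import numpy as np

def layer_matrix(omega, rho, c, d):
    """2x2 acoustic transfer matrix of a homogeneous layer.

    Relates (pressure, particle velocity) at the layer's input face to the
    same quantities at its output face for a normally incident plane wave.
    """
    k = omega / c          # wavenumber in the layer
    Z = rho * c            # acoustic impedance of the layer
    kd = k * d
    return np.array([[np.cos(kd), 1j * Z * np.sin(kd)],
                     [1j * np.sin(kd) / Z, np.cos(kd)]])

def power_transmission(omega, layers, Z_in, Z_out):
    """Intensity transmission through a stack of (rho, c, d) layers."""
    M = np.eye(2, dtype=complex)
    for rho, c, d in layers:
        M = M @ layer_matrix(omega, rho, c, d)
    A, B = M[0]
    C, D = M[1]
    # Pressure transmission coefficient between two semi-infinite media.
    t = 2.0 / (A + B / Z_out + C * Z_in + D * Z_in / Z_out)
    return (Z_in / Z_out) * abs(t) ** 2

# Illustrative alternating stack: water-like and epoxy-like layers (assumed values).
water = (1000.0, 1480.0, 2e-3)   # density [kg/m^3], sound speed [m/s], thickness [m]
epoxy = (1200.0, 2500.0, 2e-3)
stack = [water, epoxy] * 8

Z_water = 1000.0 * 1480.0
freqs = np.linspace(1e4, 1.2e6, 600)          # Hz
T = [power_transmission(2 * np.pi * f, stack, Z_water, Z_water) for f in freqs]
# Frequency intervals where T drops toward zero mark the band gaps of the stack.
print(f"min transmission in sweep: {min(T):.3e}")
```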
Finally, 2D multiple scattering theory was modified to analyze scatterers with time-varying material parameters. It is shown to be highly compatible with existing multiple scattering theories. It allows the total scattered field from a 2D time-varying phononic crystal to be determined.
It was shown that time-varying material parameters significantly affect the phononic crystal transmission spectrum, and this was used to switch an incident monochromatic wave. Parametric amplification can occur under certain circumstances, and this effect was investigated using the closed-form solutions provided by the new 1D method.
The computational complexity of the extended methods grows logarithmically rather than linearly, as with existing methods, making them better suited to large numbers of scatterers. Also, since both extended methods provide analytic solutions, they may give further insights into the factors that govern the behaviour of time-varying phononic crystals. These extended methods may now be used to design an active phononic crystal that could demonstrate new or enhanced properties.
102
Generalizing sampling theory for time-varying Nyquist rates using self-adjoint extensions of symmetric operators with deficiency indices (1,1) in Hilbert spaces. Hao, Yufang. January 2011.
Sampling theory studies the equivalence between continuous and discrete representations of information. This equivalence is ubiquitously used in communication engineering and signal processing. For example, it allows engineers to store continuous signals as discrete data on digital media.
The classical sampling theorem, also known as the Whittaker-Shannon-Kotel'nikov theorem, enables one to perfectly and stably reconstruct continuous signals of constant bandwidth from discrete samples taken at a constant Nyquist rate. The Nyquist rate depends on the bandwidth of the signals, namely the frequency upper bound. Intuitively, a signal's 'information density' and 'effective bandwidth' should vary in time. Adjusting the sampling rate accordingly should improve the sampling efficiency and information storage. While this old idea has been pursued in numerous publications, fundamental problems have remained: How can a reliable concept of time-varying bandwidth be defined? How can samples taken at a time-varying Nyquist rate lead to perfect and stable reconstruction of the continuous signals?
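For reference, a minimal numerical sketch of the constant-rate case, reconstruction by the Whittaker-Shannon sinc series, is given below; the test signal and band limit are arbitrary illustrative choices, and truncating the series to finitely many samples leaves small residual errors, which is why the reconstruction is evaluated away from the edges of the sampled interval.

```python
import numpy as np

def shannon_reconstruct(samples, T, t):
    """Reconstruct a band-limited signal from uniform samples via sinc interpolation.

    samples[n] is the signal value at time n*T, where 1/T is (at least) the
    Nyquist rate.  np.sinc(x) = sin(pi*x)/(pi*x).
    """
    n = np.arange(len(samples))
    # f(t) = sum_n f(nT) * sinc((t - nT)/T)
    return np.sum(samples[None, :] * np.sinc((t[:, None] - n[None, :] * T) / T), axis=1)

# Illustrative band-limited test signal (all components below B = 10 Hz).
f = lambda t: np.sin(2 * np.pi * 3.0 * t) + 0.5 * np.cos(2 * np.pi * 7.0 * t)

B = 10.0                 # assumed band limit [Hz]
T = 1.0 / (2.0 * B)      # sampling interval at the constant Nyquist rate
n = np.arange(200)       # 200 samples covering 10 seconds
samples = f(n * T)

t_fine = np.linspace(2.0, 8.0, 1000)     # evaluate away from the interval edges
rec = shannon_reconstruct(samples, T, t_fine)
print("max reconstruction error:", np.max(np.abs(rec - f(t_fine))))
```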
This thesis develops a new non-Fourier generalized sampling theory which takes samples only as often as necessary at a time-varying Nyquist rate and maintains the ability to perfectly reconstruct the signals. The resulting Nyquist rate is the critical sampling rate below which there is insufficient information to reconstruct the signal and above which there is redundancy in the stored samples. It is also optimal for the stability of reconstruction.
To this end, following work by A. Kempf, the sampling points at a Nyquist rate are identified as the eigenvalues of self-adjoint extensions of a simple symmetric operator with deficiency indices (1,1). The thesis then develops and in a sense completes this theory. In particular, the thesis introduces and studies filtering, and yields key results on the stability and optimality of this new method. While these new results should greatly help in making time-variable sampling methods applicable in practice, the thesis also presents a range of new purely mathematical results. For example, the thesis presents new results that show how to explicitly calculate the eigenvalues of the complete set of self-adjoint extensions of such a symmetric operator in the Hilbert space. This result is of interest in the field of functional analysis where it advances von Neumann's theory of self-adjoint extensions.
103
Fixed-analysis adaptive-synthesis filter banks. Lettsome, Clyde Alphonso. 07 April 2009.
Subband/wavelet analysis-synthesis filters are a major component in many compression algorithms. Such algorithms have been applied to images, voice, and video, and have achieved high performance. Typically, such a compression algorithm involves a bank of analysis filters whose coefficients have been designed in advance to enable high-quality reconstruction. The analysis system is followed by subband quantization and decoding on the synthesis side. Decoding is performed using a corresponding set of synthesis filters, and the subbands are merged together. For many years, there has been interest in improving the analysis-synthesis filters in order to achieve better coding quality. Adaptive filter banks, whereby the analysis and synthesis filter coefficients are changed dynamically in response to the input, have been explored by a number of authors. A degree of performance improvement has been reported, but this approach requires that the analysis system dynamically maintain synchronization with the synthesis system in order to perform reconstruction.
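To make the fixed analysis-synthesis pipeline concrete, the sketch below uses the simplest two-channel bank (orthogonal Haar) with a crude uniform quantizer between analysis and synthesis; it only illustrates the structure just described, not the adaptive or fixed-analysis adaptive-synthesis schemes considered in the thesis.

```python
import numpy as np

def haar_analysis(x):
    """Two-channel Haar analysis: lowpass and highpass subbands, each downsampled by 2."""
    x = np.asarray(x, dtype=float)
    even, odd = x[0::2], x[1::2]
    low = (even + odd) / np.sqrt(2.0)     # approximation subband
    high = (even - odd) / np.sqrt(2.0)    # detail subband
    return low, high

def haar_synthesis(low, high):
    """Invert the Haar analysis step (perfect reconstruction absent quantization)."""
    even = (low + high) / np.sqrt(2.0)
    odd = (low - high) / np.sqrt(2.0)
    x = np.empty(2 * len(low))
    x[0::2], x[1::2] = even, odd
    return x

def quantize(band, step):
    """Crude uniform scalar quantizer standing in for the subband coder."""
    return step * np.round(band / step)

# Illustrative even-length signal: smooth trend plus noise.
rng = np.random.default_rng(0)
x = np.sin(np.linspace(0, 4 * np.pi, 256)) + 0.1 * rng.standard_normal(256)

low, high = haar_analysis(x)
x_lossless = haar_synthesis(low, high)
x_coded = haar_synthesis(quantize(low, 0.05), quantize(high, 0.05))

print("perfect reconstruction error:", np.max(np.abs(x - x_lossless)))
print("reconstruction error with quantization:", np.max(np.abs(x - x_coded)))
```

Without quantization the synthesis step inverts the analysis step exactly; it is the quantizer in the middle that makes the choice of filters matter for coding quality.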
In this thesis, we explore a variant of the adaptive filter bank idea. We refer to this approach as fixed-analysis adaptive-synthesis filter banks. Unlike the adaptive filter banks proposed previously, there is no analysis-synthesis synchronization issue involved. This implies less coder complexity and more coder flexibility. Such an approach can be compatible with existing subband/wavelet encoders. The design methodology and a performance analysis are presented.
104
Essays on Energy and Regulatory Compliance. Cancho Diez, Cesar. August 2012.
This dissertation contains two essays on the analysis of market imperfections. In the first essay, I empirically test whether, in a three-level hierarchy with asymmetries of information, more competition among intermediaries leads to more deception against the principal. In this setting, intermediaries supervise agents by delegation of the principal and compete among themselves to provide supervision services to the agents. They cannot be perfectly monitored, which allows them to manipulate supervision results in favor of the agents and potentially leads to less than optimal outcomes for the principal. Using inspection-level data from the vehicular inspection program in Atlanta, I test for the existence of inspection deception (false positives), and whether its incidence is a function of the number of local competitors per station. I estimate the incidence of the most common form of false positives, clean piping (the passing result of a different vehicle fraudulently applied to a failing vehicle), to be 9% of passing inspections during the sample period. Moreover, the incidence of clean piping per station increases by 0.7% with one more competitor within a 0.5-mile radius. These results are consistent with additional competitors exacerbating the perverse incentives introduced by competition in this setting.
In the second essay, we test whether electricity consumption by industrial and commercial customers responds to real-time prices after these firms sign up for prices linked to the electricity wholesale market price. In principle, time-varying prices (TVP) can mitigate market power in wholesale markets and promote the integration of intermittent generation sources such as wind and solar power. However, little is known about the prevalence of TVP, especially in deregulated retail markets where customers can choose whether to adopt TVP, or about how these firms change their consumption after signing up for this type of tariff. We study firm-level data on commercial and industrial customers in Texas and estimate the magnitude of demand responsiveness using demand equations that respect the restrictions imposed by microeconomic theory. We find a meaningful level of take-up of TVP: in some sectors more than one-quarter of customers signed up for TVP. Nevertheless, the estimated price responsiveness of consumption is small. Estimations by size and by type of industry show that own-price elasticities are in most cases below 0.01 in absolute value. In the only cases where own-price elasticities reach 0.02 in absolute value, the magnitude of the demand response compared to aggregate demand is negligible.
105
Optimal hedging strategy in stock index futures markets. Xu, Weijun. Banking & Finance, Australian School of Business, UNSW. January 2009.
In this thesis we search for the optimal hedging strategy in stock index futures markets by providing a comprehensive comparison of the various types of models in the related literature. We concentrate on the strategy that minimizes portfolio risk, i.e., the minimum variance hedge ratio (MVHR), estimated from a range of time series models with different assumptions about market volatility: linear regression models assuming time-invariant volatility, GARCH-type models capturing time-varying volatility, Markov regime switching (MRS) regression models assuming state-varying volatility, and MRS-GARCH models capturing both time-varying and state-varying volatility. We use both Maximum Likelihood Estimation (MLE) and a Bayesian Gibbs-sampling approach to estimate the models with four commonly used index futures contracts: S&P 500, FTSE 100, Nikkei 225 and Hang Seng index futures. We apply risk reduction and utility maximization criteria to evaluate the hedging performance of the MVHRs estimated from these models. The in-sample results show that the optimal hedging strategy for the S&P 500 and the Hang Seng index futures contracts is the MVHR estimated using the MRS-OLS model, while the optimal hedging strategy for the Nikkei 225 and the FTSE 100 futures contracts is the MVHR estimated using the Asymmetric-Diagonal-BEKK-GARCH and the Asymmetric-DCC-GARCH model, respectively. In the out-of-sample investigation, the time-varying models such as the BEKK-GARCH models, especially the Scalar-BEKK model, outperform the state-varying MRS models for the majority of futures contracts in both one-step- and multiple-step-ahead forecast cases. Overall the evidence suggests that there is no single model that can consistently produce the best strategy across different index futures contracts. Moreover, using more sophisticated models such as MRS-GARCH models provides some benefit over the corresponding single-state GARCH models in the in-sample case but not in the out-of-sample case, and compared with other types of models MRS-GARCH models do not necessarily improve hedging efficiency. Furthermore, there is evidence that using the Bayesian Gibbs-sampling approach to estimate the MRS models provides investors with a more efficient hedging strategy than the MLE method.
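For reference, the simplest member of this model set, the constant OLS hedge ratio, is just the slope of spot returns on futures returns, h* = Cov(ΔS, ΔF)/Var(ΔF). The sketch below uses simulated returns as a placeholder, since the actual index data are not reproduced here; the GARCH and MRS models generalize this to a time-varying ratio built from conditional second moments.

```python
import numpy as np

def mvhr_ols(spot_ret, fut_ret):
    """Constant minimum variance hedge ratio: h* = Cov(dS, dF) / Var(dF)."""
    cov = np.cov(spot_ret, fut_ret)
    return cov[0, 1] / cov[1, 1]

def variance_reduction(spot_ret, fut_ret, h):
    """Fraction of spot return variance removed by shorting h futures per unit of spot."""
    hedged = spot_ret - h * fut_ret
    return 1.0 - np.var(hedged) / np.var(spot_ret)

# Placeholder data: correlated simulated daily returns (not actual index data).
rng = np.random.default_rng(1)
fut_ret = 0.01 * rng.standard_normal(1000)
spot_ret = 0.95 * fut_ret + 0.002 * rng.standard_normal(1000)

h = mvhr_ols(spot_ret, fut_ret)
print(f"OLS hedge ratio: {h:.3f}")
print(f"in-sample variance reduction: {variance_reduction(spot_ret, fut_ret, h):.1%}")
```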
106
Modeling the Dynamics on the Effectiveness of Marketing Mix Elements. Greene, Mallik. 06 August 2014.
The objective of this study is to conduct marketing mix modeling to measure the effectiveness of past marketing activities on product sales using a time-varying effect model (TVEM) approach. The intensive longitudinal data for this study come from a large ice cream manufacturer in the USA. Traditionally, static regression models have been used to measure the effectiveness of marketing mix variables in predicting sales; these models capture only the time-independent effect of each covariate on the dependent variable. A dynamic model such as the time-varying effect model, by contrast, takes time into consideration: researchers can model changes in the relationship between the dependent and independent variables over time. This is the first study in which a time-varying effect model approach has been used to measure the effectiveness of marketing mix elements in the ice cream industry. In addition, we compare the predictive validity of the static and dynamic models using this data set.
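A minimal sketch of the time-varying effect idea follows, assuming simulated weekly data and a simple polynomial basis in time for the coefficient functions; applied TVEM work typically uses penalized spline bases, so this is a schematic illustration rather than the model estimated in the study.

```python
import numpy as np

def fit_tvem_poly(y, x, t, degree=2):
    """Least-squares fit of y_t = b0(t) + b1(t) * x_t + e_t,
    with b0(t) and b1(t) expanded on a polynomial basis in (scaled) time."""
    t = (t - t.min()) / (t.max() - t.min())                    # scale time to [0, 1]
    basis = np.vstack([t ** p for p in range(degree + 1)]).T   # columns: 1, t, t^2, ...
    X = np.hstack([basis, basis * x[:, None]])                 # intercept part and x part
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    b0 = basis @ coef[: degree + 1]                            # fitted b0(t)
    b1 = basis @ coef[degree + 1:]                             # fitted b1(t)
    return b0, b1

# Simulated example: the promotion effect on sales decays over two years of weeks.
rng = np.random.default_rng(2)
weeks = np.arange(104.0)
promo = rng.uniform(0.0, 1.0, size=weeks.size)        # promotional intensity
true_effect = 3.0 - 2.0 * weeks / weeks.max()         # time-varying effectiveness
sales = 10.0 + true_effect * promo + rng.standard_normal(weeks.size)

b0_hat, b1_hat = fit_tvem_poly(sales, promo, weeks)
print("estimated promo effect, first vs last week:", b1_hat[0], b1_hat[-1])
```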
107
New VAR evidence on monetary transmission channels: temporary interest rate versus inflation target shocks. Lukmanova, Elizaveta; Rabitsch, Katrin.
We augment a standard monetary VAR on output growth, inflation and the nominal interest rate with the central bank's inflation target, which we estimate from a New Keynesian DSGE model. Inflation target shocks give rise to a simultaneous increase in inflation and the nominal interest rate in the short run, at no output expense, which stands at the center of an active current debate on the Neo-Fisher effect. In addition, accounting for persistent monetary policy changes reflected in inflation target changes improves identification of a standard temporary nominal interest rate shock in that it strongly alleviates the price puzzle. / Series: Department of Economics Working Paper Series
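The reduced-form backbone of such an exercise can be sketched as equation-by-equation OLS on a lagged regressor matrix, as below. The data are simulated placeholders, and the paper's identification of temporary versus inflation-target shocks relies on the DSGE-based target series, not on the simple Cholesky ordering used here for illustration.

```python
import numpy as np

def estimate_var(Y, p):
    """OLS estimation of a reduced-form VAR(p): Y_t = c + A_1 Y_{t-1} + ... + A_p Y_{t-p} + u_t."""
    T, n = Y.shape
    X = np.hstack([np.ones((T - p, 1))] + [Y[p - i: T - i] for i in range(1, p + 1)])
    B, *_ = np.linalg.lstsq(X, Y[p:], rcond=None)       # shape (1 + n*p, n)
    U = Y[p:] - X @ B
    Sigma = U.T @ U / (T - p - X.shape[1])              # residual covariance
    A = [B[1 + n * (i - 1): 1 + n * i].T for i in range(1, p + 1)]   # lag matrices A_i
    return A, Sigma

def impulse_responses(A, Sigma, horizons, shock):
    """Responses to a one-s.d. Cholesky-identified shock in position `shock`."""
    n, p = A[0].shape[0], len(A)
    P = np.linalg.cholesky(Sigma)
    Phi = [np.eye(n)]
    for h in range(1, horizons + 1):
        Phi.append(sum(A[i] @ Phi[h - 1 - i] for i in range(min(h, p))))
    return np.array([Phi[h] @ P[:, shock] for h in range(horizons + 1)])

# Placeholder data: three simulated series standing in for output growth, inflation, interest rate.
rng = np.random.default_rng(3)
Y = rng.standard_normal((200, 3)).cumsum(axis=0) * 0.05 + rng.standard_normal((200, 3))

A, Sigma = estimate_var(Y, p=2)
irf = impulse_responses(A, Sigma, horizons=12, shock=2)   # shock to the third variable
print("impact and 12-step response of the shocked variable:", irf[0, 2], irf[12, 2])
```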
108
Um modelo "time-varying Markov-switching" para crescimento econômico e algoritmo de estimação [A time-varying Markov-switching model for economic growth and an estimation algorithm]. Morier, Bruno do Nascimento. 20 January 2011.
This work builds a model to investigate the pattern of variation of economic growth, across countries and over time, using a Markov-switching framework with a time-varying transition matrix. The model follows the approach of Pritchett (2003), describing growth dynamics through a collection of different states (each with its own sub-model and growth pattern) through which countries move over time. The transition matrix between states is time-varying, depending on conditioning variables for each country, while the dynamics within each state are linear. We develop an estimation method generalizing the EM algorithm of Diebold et al. (1993) and estimate an example panel model with the transition matrix conditioned on the quality of institutions and the level of investment. We find three growth states: stable growth, 'miracle' growth, and stagnation, virtually coincident with the first three states of Jerzmanowski (2006). The results show that the quality of institutions is an important determinant of long-term growth, while the level of investment plays a differentiated role: it contributes positively in countries with good institutions and has little relevance in countries with median or worse institutions.
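The filtering step at the core of such a model can be sketched for a two-state version with logistic transition probabilities driven by a single covariate, as below. The states, parameter values, and data are invented for illustration (the thesis works with three states and real panel data), and the EM-based parameter estimation is not reproduced.

```python
import numpy as np

def tv_transition_matrix(z, a, b):
    """2x2 transition matrix at time t, with state-staying probabilities
    p11(z) and p22(z) given by logistic functions of the covariate z."""
    p11 = 1.0 / (1.0 + np.exp(-(a[0] + b[0] * z)))
    p22 = 1.0 / (1.0 + np.exp(-(a[1] + b[1] * z)))
    return np.array([[p11, 1.0 - p11],
                     [1.0 - p22, p22]])

def hamilton_filter(y, z, mu, sigma, a, b):
    """Filtered state probabilities and log-likelihood for a two-state
    Markov-switching mean/variance model of growth y_t, with a transition
    matrix that depends on the covariate z_t."""
    T = len(y)
    xi = np.full(2, 0.5)                  # initial state distribution (flat)
    filtered = np.zeros((T, 2))
    loglik = 0.0
    for t in range(T):
        P = tv_transition_matrix(z[t], a, b)
        pred = xi @ P                     # P(S_t = j | info up to t-1)
        dens = np.exp(-0.5 * ((y[t] - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
        joint = pred * dens
        lik_t = joint.sum()
        loglik += np.log(lik_t)
        xi = joint / lik_t                # P(S_t = j | info up to t)
        filtered[t] = xi
    return filtered, loglik

# Invented example: 'stable growth' vs 'stagnation' states, with an institutions
# index z making the stable state more persistent when institutions are good.
rng = np.random.default_rng(4)
z = rng.uniform(-1.0, 1.0, 120)                       # placeholder institutions index
y = rng.normal(0.02, 0.02, 120)                       # placeholder growth series
mu = np.array([0.03, -0.01])                          # state means
sigma = np.array([0.015, 0.03])                       # state standard deviations
a, b = np.array([1.5, 0.5]), np.array([2.0, -1.0])    # transition-logit parameters

filtered, loglik = hamilton_filter(y, z, mu, sigma, a, b)
print("log-likelihood:", loglik, " P(stable) at end of sample:", filtered[-1, 0])
```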
109
Eseje ve finanční ekonometrii [Essays in Financial Econometrics]. Avdulaj, Krenar. January 2016.
Proper understanding of the dependence between assets is a crucial ingredient for a number of portfolio and risk management tasks. While research in this area has been lively for decades, the recent financial crisis of 2007-2008 reminded us that we might not understand the dependence properly. This crisis served as a catalyst boosting the demand for models capturing dependence structures. Reminded by this urgent call, the literature is responding by moving to nonlinear dependence models resembling the dependence structures observed in the data. In my dissertation, I contribute to this surge with three papers in financial econometrics, focusing on nonlinear dependence in financial time series from different perspectives. I propose a new empirical model which allows capturing and forecasting the conditional time-varying joint distribution of the oil-stocks pair accurately. Employing a recently proposed conditional diversification benefits measure that considers higher-order moments and nonlinear dependence from tail events, I document decreasing benefits from diversification over the past ten years. The diversification benefits implied by my empirical model moreover vary strongly over time. These findings have important implications for asset allocation, as the benefits of...
110
Macroeconomic models with endogenous learning. Gaus, Eric.
The behavior of the macroeconomy and monetary policy is heavily influenced by expectations. Recent research has explored how minor changes in expectation formation can change the stability properties of a model. One common way to alter expectation formation involves agents' use of econometrics to form forecasting equations. Agents update their forecasts based on new information that arises as the economy progresses through time. In this way agents "learn" about the economy.
Previous learning literature mostly focuses on agents using a fixed data size or increasing the amount of data they use. My research explores how agents might endogenously change the amount of data they use to update their forecast equations.
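The recursive least squares updating that this literature builds on can be sketched as follows; the only difference between using all past data and discounting old data is the gain sequence, and the endogenous-gain algorithms examined in the chapters below switch between or adapt such gains, which is not reproduced here. The regression and parameter values are invented for illustration.

```python
import numpy as np

def rls_learning(y, X, gain):
    """Recursive least squares updating of beliefs phi in y_t = phi'x_t + e_t.

    A decreasing gain of roughly 1/t weights all past data (recursive OLS),
    while a constant gain discounts old observations, behaving like a rolling
    window of about 1/gain observations.
    """
    T, k = X.shape
    phi = np.zeros(k)                      # initial beliefs
    R = np.eye(k)                          # estimate of the second-moment matrix
    path = np.zeros((T, k))
    for t in range(T):
        g = gain(t + 1)
        R = R + g * (np.outer(X[t], X[t]) - R)
        phi = phi + g * np.linalg.solve(R, X[t]) * (y[t] - X[t] @ phi)
        path[t] = phi
    return path

# Invented static regression y_t = 1 + 0.5 x_t + noise to compare the two gains.
rng = np.random.default_rng(5)
x = rng.standard_normal(500)
X = np.column_stack([np.ones_like(x), x])
y = 1.0 + 0.5 * x + 0.2 * rng.standard_normal(500)

# The +20 offset acts like a prior sample size and keeps R well conditioned early on.
dec = rls_learning(y, X, gain=lambda t: 1.0 / (t + 20))   # decreasing gain: uses all past data
con = rls_learning(y, X, gain=lambda t: 0.02)             # constant gain: tracks, stays noisy
print("final beliefs, decreasing gain:", dec[-1])
print("final beliefs, constant gain:  ", con[-1])
```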
My first chapter explores how an established endogenous learning algorithm, proposed by Marcet and Nicolini, may influence monetary policy decisions. Under rational expectations (RE), determinacy serves as the main criterion for favoring a model or monetary policy rule. A determinate model need not result in stability under an alternative expectation formation process called learning. Researchers therefore appeal to stability under learning as a criterion for monetary policy rule selection.
This chapter provides a cautionary tale for policy makers and reinforces the importance of the role of expectations. Simulations appear stable for a prolonged interval of time but may suddenly deviate from the RE solution. This exotic behavior exhibits significantly higher volatility relative to RE, yet over long simulations it remains true to the RE equilibrium.
In the second chapter I address the effectiveness of endogenous gain learning algorithms in the presence of occasional structural breaks. Marcet and Nicolini's algorithm relies on agents reacting to forecast errors. I propose an alternative, which relies on agents using statistical information.
The third chapter uses standard macroeconomic data to find out whether a model with non-rational expectations can outperform RE. I answer this question affirmatively and explore what learning means for the economy. In addition, I conduct a Monte Carlo exercise to investigate whether a simple learning model does, empirically, embed an RE model. While theoretically a very small constant gain implies RE, empirically learning creates bias in the coefficient estimates.
Committee in charge: George Evans, Co-Chairperson, Economics; Jeremy Piger, Co-Chairperson, Economics; Shankha Chakraborty, Member, Economics; Sergio Koreisha, Outside Member, Decision Sciences.