231 |
Autoregressive Conditional Density. Lindberg, Jacob, January 2016.
We compare two time series models: an ARMA(1,1)-ACD(1,1)-NIG model and an ARMA(1,1)-GARCH(1,1)-NIG model. Their out-of-sample performance, rather than their in-sample properties, is of interest. The models produce one-day-ahead forecasts which are evaluated using three statistical tests: the VaR test, the VaR duration test and the Berkowitz test. All three tests focus on tail events, since these time series models are often used to estimate downside risk. When the two models are applied to data on Canadian stock market returns, the three tests indicate that the ACD model and the GARCH model perform similarly; the difference between them is small. We finish with comments on the model uncertainty inherent in the comparison.
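A hedged sketch of the kind of tail-focused backtest referred to above: the unconditional-coverage (Kupiec) likelihood-ratio test counts the days on which realized returns breach the one-day-ahead VaR forecast and compares the breach frequency with the nominal level. The variable names, the 5% level and the simulated data are illustrative assumptions, not details taken from the thesis.

```python
import numpy as np
from scipy.stats import chi2

def kupiec_test(returns, var_forecasts, alpha=0.05):
    """Unconditional-coverage LR test for one-day-ahead VaR forecasts.

    returns       : realized daily log-returns
    var_forecasts : one-day-ahead VaR quantile forecasts (negative values for losses)
    alpha         : nominal tail probability of the VaR
    """
    breaches = returns < var_forecasts           # True on days the loss exceeds the VaR
    n, x = len(returns), int(breaches.sum())
    pi_hat = x / n                               # empirical breach frequency
    # log-likelihoods under the nominal and the empirical breach probability
    ll_null = x * np.log(alpha) + (n - x) * np.log(1 - alpha)
    ll_alt = x * np.log(pi_hat) + (n - x) * np.log(1 - pi_hat) if 0 < x < n else ll_null
    lr = -2.0 * (ll_null - ll_alt)
    p_value = 1.0 - chi2.cdf(lr, df=1)
    return x, lr, p_value

# toy usage with simulated data (illustrative only)
rng = np.random.default_rng(0)
r = rng.standard_t(df=5, size=750) * 0.01
var = np.full_like(r, np.quantile(r, 0.05))      # a naive constant VaR for illustration
print(kupiec_test(r, var))
```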
|
232 |
Strategies, Methods and Tools for Solving Long-term Transmission Expansion Planning in Large-scale Power Systems. Fitiwi, Desta Zahlay, January 2016.
Driven by a number of factors, the electric power industry is expected to undergo a paradigm shift with a considerably increased level of variable energy sources. A significant integration of such sources requires heavy transmission investments over geographically wide and large-scale networks. However, the stochastic nature of such sources, along with the sheer size of network systems, results in problems that may become intractable. Thus, the challenge addressed in this work is to design efficient and reasonably accurate models, strategies and tools that can solve large-scale TEP problems under uncertainty. A long-term stochastic network planning tool is developed, considering a multi-stage decision framework and a high level of integration of renewables. Such a tool combines the need for short-term decisions with the evaluation of long-term scenarios, which is the practical essence of real-world planning. Furthermore, in order to significantly reduce the combinatorial solution search space, a specific heuristic solution strategy is devised. This works by decomposing the original problem into successive optimization phases. One of the modeling challenges addressed in this work is to select the right network model for power flow and congestion evaluation: complex enough to capture the relevant features but simple enough to be computationally fast. Another relevant contribution is a domain-driven clustering process of snapshots, based on a "moments" technique. Finally, the developed models, methods and solution strategies have been tested on standard and real-life systems. This thesis also presents numerical results for an aggregated 1060-node European network system considering multiple RES development scenarios. Overall, the test results show the effectiveness of the proposed TEP model, since, as originally intended, it contributes to a significant reduction in computational effort while fairly maintaining optimality of the solutions. / Driven by several techno-economic, environmental and structural factors, the electric energy industry is expected to undergo a paradigm shift with a considerably increased level of renewables (mainly variable energy sources such as wind and solar), gradually replacing conventional power production sources. The scale and the speed of integrating such sources of energy are of paramount importance to effectively address a multitude of global and local concerns such as climate change, sustainability and energy security. In recent years, wind and solar power have been attracting large-scale investments in many countries, especially in Europe. The favorable agreements of states to curb greenhouse gas emissions and mitigate climate change, along with other driving factors, will further accelerate the integration of renewables in power systems. Renewable energy sources (RESs), wind and solar in particular, are abundant almost everywhere, although their energy intensities differ greatly from one place to another. Because of this, a significant integration of such energy sources requires heavy investments in transmission infrastructure. In other words, transmission expansion planning (TEP) has to be carried out in geographically wide and large-scale networks. This helps to effectively accommodate the RESs and optimally exploit their benefits while minimizing their side effects.
However, the uncertain nature of most renewable sources, along with the size of the network systems, results in optimization problems that may become intractable in practice or require a huge computational effort. Thus, the challenge addressed in this work is to design models, strategies and tools that can solve large-scale TEP problems under uncertainty while remaining computationally efficient and reasonably accurate. Of course, the specific definition of the term "reasonably accurate" is a key issue of this thesis, since it requires a deep understanding of the main cost and technical drivers of adequate TEP investment decisions. A new formulation is proposed in this dissertation for long-term planning of transmission investments under uncertainty, with a multi-stage decision framework and considering a high level of renewable integration. This multi-stage strategy combines the need for short-term decisions with the evaluation of long-term scenarios, which is the practical essence of real-world planning. The TEP problem is formulated as a stochastic mixed-integer linear program (S-MILP), which can be solved by exact methods. This allows the use of effective off-the-shelf solvers to obtain solutions within a reasonable computational time, enhancing overall problem tractability. Furthermore, in order to significantly reduce the combinatorial solution search (CSS) space, a specific heuristic solution strategy is devised. In this global heuristic strategy, the problem is decomposed into successive optimization phases. Each phase uses more complex optimization models than the previous one and uses the results of the previous phase, so that the combinatorial solution search space is reduced after each phase. Moreover, each optimization phase is defined and solved as an independent problem, thus allowing the use of specific decomposition techniques or parallel computation when possible. A relevant feature of the solution strategy is that it combines deterministic and stochastic modeling techniques in a multi-stage framework with a rolling-window planning concept. The planning horizon is divided into two sub-horizons, medium- and long-term, both having multiple decision stages. In each stage of the first sub-horizon, a single set of investments is selected that is good enough for all scenarios, while scenario-dependent decisions are made in the second sub-horizon. One of the first modeling challenges of this work is to select the right network model for power flow and congestion evaluation: complex enough to capture the relevant features but simple enough to be computationally fast. The thesis includes an extensive analysis of existing and improved network models such as AC, linearized AC, "DC", hybrid and pipeline models, for both existing and candidate lines. Finally, a DC network model is proposed as the most suitable option. This work also analyzes alternative loss models. Some of them are already available and others are proposed as original contributions of the thesis. These models are evaluated in the context of the target problem, i.e., in finding the right balance between accuracy and computational effort in a large-scale TEP problem subject to significant RES integration. It has to be pointed out that, although losses are usually neglected in TEP studies because of computational limitations, they are critical in network expansion decisions.
In fact, using inadequate models may lead not only to cost-estimation errors, but also to technical errors such as the so-called "artificial losses". Another relevant contribution of this work is a domain-driven clustering process to handle operational states. This allows a more compact and efficient representation of uncertainty with little loss of accuracy. This is relevant because, together with electricity demand and other traditional sources of uncertainty, the integration of variable energy sources introduces additional operational variability and uncertainty. A substantial part of this uncertainty and variability is often handled by a set of operational states, here referred to as "snapshots", which are generation-demand patterns of power systems that lead to optimal power flow (OPF) patterns in the transmission network. A large set of snapshots, each one with an estimated probability, is then used to evaluate and optimize the network expansion. In a long-term TEP problem of large networks, the number of operational states must be reduced. Hence, from a methodological perspective, this thesis shows how the snapshot reduction can be achieved by means of clustering, without relevant loss of accuracy, provided that a good selection of classification variables is used in the clustering process. The proposed method relies on two ideas. First, the snapshots are characterized by their OPF patterns (the effects) instead of the generation-demand patterns (the causes). This is simply because the network expansion is the target problem, and losses and congestion are the drivers of network investments. Second, the OPF patterns are classified using a "moments" technique, a well-known approach in optical pattern recognition problems. The developed models, methods and solution strategies have been tested on small-, medium- and large-scale network systems. This thesis also presents numerical results for an aggregated 1060-node European network system, obtained considering multiple RES development scenarios. Overall, the test results show the effectiveness of the proposed TEP model, since, as originally intended, it contributes to a significant reduction in computational effort while fairly maintaining optimality of the solutions.
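To make the flavor of such a formulation concrete, the following minimal sketch sets up a single-stage, deterministic transmission expansion MILP on a toy three-bus system, using a DC network model and big-M disjunctive constraints for candidate lines. The network data, the cost figures and the use of the PuLP modeling library with the CBC solver are illustrative assumptions; the thesis works with a multi-stage stochastic formulation on far larger systems.

```python
# pip install pulp   (an assumed open-source MILP modeling layer, not the tool used in the thesis)
import pulp

buses = [1, 2, 3]
demand = {1: 0.0, 2: 0.9, 3: 0.6}                 # p.u. loads
gen_cap = {1: 2.0, 2: 0.0, 3: 0.0}                # cheap generation only at bus 1
existing = {(1, 2): dict(b=10.0, cap=1.0)}        # susceptance and thermal limit, p.u.
candidate = {(1, 3): dict(b=10.0, cap=0.8, cost=100.0),
             (2, 3): dict(b=10.0, cap=0.8, cost=120.0)}
gen_cost, M = 5.0, 15.0                           # operating cost and disjunctive big-M

m = pulp.LpProblem("toy_TEP", pulp.LpMinimize)
theta = {i: pulp.LpVariable(f"theta_{i}", -0.5, 0.5) for i in buses}
g = {i: pulp.LpVariable(f"g_{i}", 0, gen_cap[i]) for i in buses}
f_e = {l: pulp.LpVariable(f"fe_{l[0]}_{l[1]}", -d["cap"], d["cap"]) for l, d in existing.items()}
f_c = {l: pulp.LpVariable(f"fc_{l[0]}_{l[1]}", -d["cap"], d["cap"]) for l, d in candidate.items()}
x = {l: pulp.LpVariable(f"build_{l[0]}_{l[1]}", cat="Binary") for l in candidate}

# objective: investment cost plus operating cost
m += pulp.lpSum(candidate[l]["cost"] * x[l] for l in candidate) + \
     pulp.lpSum(gen_cost * g[i] for i in buses)

# DC flow on existing lines: f = b * (theta_i - theta_j)
for (i, j), d in existing.items():
    m += f_e[(i, j)] == d["b"] * (theta[i] - theta[j])

# candidate lines: flow law enforced only if built (big-M), capacity scaled by the build decision
for (i, j), d in candidate.items():
    m += f_c[(i, j)] - d["b"] * (theta[i] - theta[j]) <= M * (1 - x[(i, j)])
    m += f_c[(i, j)] - d["b"] * (theta[i] - theta[j]) >= -M * (1 - x[(i, j)])
    m += f_c[(i, j)] <= d["cap"] * x[(i, j)]
    m += f_c[(i, j)] >= -d["cap"] * x[(i, j)]

# nodal balance: generation + inflow - outflow = demand
for i in buses:
    inflow = pulp.lpSum(fd[l] for fd in (f_e, f_c) for l in fd if l[1] == i)
    outflow = pulp.lpSum(fd[l] for fd in (f_e, f_c) for l in fd if l[0] == i)
    m += g[i] + inflow - outflow == demand[i]

m += theta[1] == 0                                 # slack bus
m.solve(pulp.PULP_CBC_CMD(msg=False))
print({l: int(x[l].value()) for l in candidate}, pulp.value(m.objective))
```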
|
233 |
Rehaussement et détection des attributs sismiques 3D par techniques avancées d'analyse d'images / 3D Seismic Attributes Enhancement and Detection by Advanced Technology of Image Analysis. Li, Gengxiang, 19 April 2012.
Moments have been extensively used in pattern recognition and image processing. In this thesis, we focus our attention on 3D orthogonal Gaussian-Hermite moments, 2D and 3D Gaussian-Hermite moment invariants, a fast algorithm for the coherency attribute, and applications of the moment methodology to seismic interpretation. We develop seismic horizon auto-tracking methods based on Gaussian-Hermite moments and moment invariants, in both the 1D and 3D cases, and introduce a multi-scale moment-invariant approach. The experimental results show that the 3D Gaussian-Hermite moment method performs better than the most popular algorithms. We also address seismic facies analysis based on feature vectors built from 3D Gaussian-Hermite moments, combined with the Self-Organizing Maps method and data visualization techniques. The excellent facies-analysis results show that the integrated environment gives the best performance in interpreting the correct cluster structure. Finally, we introduce parallel processing and volume visualization. Taking advantage of multi-threading and multi-core technologies in seismic data processing and interpretation, we efficiently compute seismic attributes and track horizons. We also discuss a volume rendering algorithm based on the OpenSceneGraph engine, which provides better insight into the structure of seismic data.
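As a rough illustration of the underlying quantity, the sketch below computes low-order 2D Gaussian-Hermite moments of an image patch by projecting it onto Hermite polynomials weighted by a Gaussian. The normalization, grid, and chosen orders are illustrative assumptions; the thesis develops the 3D orthogonal moments and their invariants.

```python
import numpy as np
from math import factorial, pi, sqrt
from numpy.polynomial.hermite import hermval

def gh_basis(x, n, sigma=1.0):
    """Order-n Gaussian-Hermite function evaluated on grid x (unit-norm up to discretization)."""
    u = x / sigma
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0                                   # select the physicists' Hermite polynomial H_n
    norm = 1.0 / sqrt(2**n * factorial(n) * sigma * sqrt(pi))
    return norm * np.exp(-u**2 / 2.0) * hermval(u, coeffs)

def gaussian_hermite_moments(patch, max_order=3, sigma=1.0):
    """2D Gaussian-Hermite moments M_pq of a square patch, p + q <= max_order."""
    n = patch.shape[0]
    x = np.linspace(-1.0, 1.0, n)
    basis = {k: gh_basis(x, k, sigma) for k in range(max_order + 1)}
    moments = {}
    for p in range(max_order + 1):
        for q in range(max_order + 1 - p):
            kernel = np.outer(basis[p], basis[q])     # separable 2D basis function
            moments[(p, q)] = float(np.sum(patch * kernel))
    return moments

# toy usage: moments of a synthetic dipping "horizon" patch (stand-in for a seismic event)
xx, yy = np.meshgrid(np.linspace(-1, 1, 32), np.linspace(-1, 1, 32))
patch = np.exp(-((yy - 0.3 * xx) ** 2) / 0.02)
print(gaussian_hermite_moments(patch, max_order=2))
```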
|
234 |
Jsou realizované momenty užitečné pro analýzu výnosů akcií? / Are realized moments useful for stock market returns analysis? Saktor, Ira, January 2019.
This thesis analyzes the use of realized moments in asset pricing. The analysis uses a dataset containing log-returns for 29 of the most traded stocks and covering 10 years of data, split into a training set covering 7 years and a test set covering 3 years. For each of the stocks a separate time series model is estimated. The quality of the models is evaluated with metrics such as RMSE, MAD, accuracy in forecasting the sign of future returns, and the returns achievable by executing trades based on the models' recommendations. Even though the inclusion of realized moments does not provide significant improvements in terms of RMSE, realized skewness and kurtosis are found to contribute significantly to explaining the returns of individual stocks, as they lead to consistent improvements in identifying future positive, as well as negative, returns. Moreover, the recommendations from the models using realized moments can help achieve significantly higher returns from trading stocks. Inclusion of the interaction terms for variance and returns, skewness and returns, and kurtosis and variance provides additional improvement in forecasting accuracy, as well as improvements in the returns achievable by executing transactions based on the model's recommendations....
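A hedged sketch of the standard construction of realized moments from intraday log-returns (realized variance, plus the realized skewness and kurtosis scalings popularized in the realized-moments literature); the abstract does not specify the exact estimator, so the formulas and the five-minute sampling frequency below are assumptions.

```python
import numpy as np

def realized_moments(intraday_log_returns):
    """Daily realized variance, skewness and kurtosis from intraday log-returns."""
    r = np.asarray(intraday_log_returns, dtype=float)
    n = r.size
    rv = np.sum(r**2)                                # realized variance
    rskew = np.sqrt(n) * np.sum(r**3) / rv**1.5      # realized skewness
    rkurt = n * np.sum(r**4) / rv**2                 # realized kurtosis
    return rv, rskew, rkurt

# toy usage: 78 five-minute returns in one trading day (simulated, illustrative only)
rng = np.random.default_rng(1)
r_5min = rng.normal(0.0, 0.001, size=78)
print(realized_moments(r_5min))
```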
|
235 |
Essays in Empirical Asset Pricing. Chiang, I-Hsuan Ethan, January 2009.
Thesis advisor: Pierluigi Balduzzi / This dissertation consists of two essays in empirical asset pricing. Chapter I, "Skewness and Co-skewness in Bond Returns," explores skewness and co-skewness in discrete-horizon bond returns. Using data for 1976-2005, we find that bond skewness is comparable to that of equities, varies with the holding period, and varies over time. Speculative-grade bonds and collateralized securities have substantial negative skewness. The sign of the price of co-skewness risk in the fixed-income market is in general consistent with the theoretical prediction of the three-moment CAPM. Co-skewness against the market portfolio is priced differently in various bond sectors: taking a unit of co-skewness risk is rewarded with 0.43% and 2.47% per month for corporate bonds and collateralized securities, respectively. Co-skewness risk helps explain the cross section of expected bond returns when state variables such as inflation, real activity, or short-term interest rates are included, or when conditioning information is exploited. Chapter II, "Modern Portfolio Management with Conditioning Information," studies models in which active portfolio managers optimize performance relative to a benchmark and utilize conditioning information unavailable to their clients. We provide explicit solutions for the optimal strategies with multiple risky assets, with or without a risk-free asset, and also consider various constraints on portfolio risk or on portfolio weights. The equilibrium implications of the models are discussed. A currency portfolio example shows that the optimal solutions improve the measured performance by 53% out of sample, compared with portfolios ignoring conditioning information. / Thesis (PhD), Boston College, 2009. / Submitted to: Boston College, Carroll School of Management. / Discipline: Finance.
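A hedged sketch of one common way to estimate the co-skewness of an asset's return against the market, as in the three-moment CAPM literature: the standardized co-moment E[(r_i - mu_i)(r_m - mu_m)^2] / (sigma_i * sigma_m^2). The chapter may use a regression-based or conditional variant; the formula and the simulated data below are illustrative assumptions.

```python
import numpy as np

def coskewness(asset_returns, market_returns):
    """Standardized co-skewness of an asset's return against the market return."""
    ri = np.asarray(asset_returns, dtype=float)
    rm = np.asarray(market_returns, dtype=float)
    di, dm = ri - ri.mean(), rm - rm.mean()
    return np.mean(di * dm**2) / (ri.std(ddof=0) * rm.var(ddof=0))

# toy usage: an asset that loses more when market variance is high (negative co-skewness expected)
rng = np.random.default_rng(2)
rm = rng.normal(0.005, 0.04, size=360)                     # 30 years of monthly market returns
ri = 0.002 + 0.8 * rm - 3.0 * (rm - rm.mean())**2 + rng.normal(0, 0.01, 360)
print(coskewness(ri, rm))
```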
|
236 |
Two Essays in Economics. Shevyakhova, Elizaveta, January 2009.
Thesis advisor: Arthur Lewbel / The thesis includes two essays. The first essay, Inequality Moments in Estimation of Discrete Games with Incomplete Information and Multiple Equilibria, develops a method for the estimation of static discrete games with incomplete information that delivers consistent parameter estimates even when the games have multiple equilibria. Every Bayes-Nash equilibrium in a discrete game of incomplete information is associated with a set of choice probabilities. I use the maximum and minimum equilibrium choice probabilities as upper and lower bounds on the empirical choice probabilities to construct moment inequalities. In general, estimation with moment inequalities results in partial identification. I show that point identification is achievable if the payoffs are functions of a sufficient number of explanatory variables with real-line support and outcome-specific coefficients associated with them. The second essay, Tenancy Rent Control and Credible Commitment in Maintenance, co-authored with Richard Arnott, investigates the effect of tenancy rent control on maintenance and welfare. Under tenancy rent control, rents are regulated within a tenancy but not between tenancies. The essay analyzes the effects of tenancy rent control on housing quality, maintenance, and rehabilitation. Since the discounted revenue received over a fixed-duration tenancy depends only on the starting rent, the landlord intuitively has an incentive to spruce up the unit between tenancies in order to show it well, but little incentive to maintain the unit well during the tenancy. The essay formalizes this intuition and presents numerical examples illustrating the efficiency loss from this effect. / Thesis (PhD), Boston College, 2009. / Submitted to: Boston College, Graduate School of Arts and Sciences. / Discipline: Economics.
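To illustrate the idea of bounding empirical choice probabilities by extremal equilibrium choice probabilities, the sketch below uses a toy symmetric two-player entry game of incomplete information: each player enters if alpha + delta * p_opponent exceeds a standard-normal private shock, so symmetric equilibria are fixed points p = Phi(alpha + delta * p). The game, the fixed-point grid search and the inequality criterion are illustrative assumptions, not the estimator developed in the essay.

```python
import numpy as np
from scipy.stats import norm

def equilibrium_probs(alpha, delta, grid=np.linspace(0.0, 1.0, 2001)):
    """All (approximate) symmetric Bayes-Nash equilibria p = Phi(alpha + delta * p)."""
    residual = grid - norm.cdf(alpha + delta * grid)
    crossings = np.where(np.diff(np.sign(residual)) != 0)[0]     # brackets of fixed points
    return [float(grid[i]) for i in crossings] or [float(grid[np.argmin(np.abs(residual))])]

def inequality_criterion(alpha, delta, p_hat):
    """Penalty for an empirical choice probability lying outside [min, max] equilibrium probabilities."""
    eq = equilibrium_probs(alpha, delta)
    lo, hi = min(eq), max(eq)
    return max(lo - p_hat, 0.0) ** 2 + max(p_hat - hi, 0.0) ** 2

# toy usage: sketch of the identified set on a coarse parameter grid, given p_hat = 0.35
p_hat = 0.35
identified = [(round(a, 2), round(d, 2))
              for a in np.linspace(-1.5, 1.5, 31)
              for d in np.linspace(-2.0, 2.0, 41)
              if inequality_criterion(a, d, p_hat) < 1e-6]
print(len(identified), identified[:5])   # typically a region of parameters, i.e., partial identification
```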
|
237 |
Dinâmica de plasma e fônon e emissão de radiação terahertz em superfícies de GaAs e telúrio excitadas por pulsos ultracurtos / Plasma-phonon dynamics and terahertz emission in GaAs and Te surfaces excited via ultrafast pulses. Souza, Fabricio Macedo de, 10 April 2000.
Above-band-gap optical excitation of a semiconductor generates highly non-equilibrium photocarriers, which interact with the lattice and excite longitudinal optical phonon modes. This interaction induces refractive-index changes via the electro-optic effect, modulating the optical response of the medium. It also gives rise to time-dependent polarizations that emit electromagnetic radiation at characteristic terahertz frequencies. Both effects have been measured by time-resolved ultrafast spectroscopy: recent pump-probe experiments have observed strong modulations of the internal electric field through time-resolved reflectivity (electro-optic) measurements, and the emitted radiation has been detected with antennas operating in the terahertz range. Both electro-optic and terahertz-emission measurements therefore provide information about the coupled dynamics of photocarriers and phonons after optical excitation. In this work we simulate the dynamics of coupled plasmon-phonon modes in bulk n-GaAs and tellurium following ultrafast laser excitation. The time evolution of the photocarrier densities and currents is described semi-classically by hydrodynamic equations derived from the moments of the Boltzmann equation. Phonon effects are accounted for through a phenomenological driven-harmonic-oscillator equation for the longitudinal optical lattice vibrations, coupled to the electron-hole plasma via Poisson's equation, from which the field generated by the plasma and by the lattice polarization is computed. These equations constitute a coupled set of six differential equations, four of them partial, which we solve by finite differences. From the numerical results for the evolution of the internal electric field we determine the characteristic oscillation frequencies of the system and calculate the radiated field. Our results are in qualitative agreement with recent experimental data.
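A minimal numerical sketch of the phenomenological ingredient described above: a damped, driven harmonic oscillator for the phonon coordinate integrated by explicit finite differences, with a decaying field transient standing in for the plasma-generated driving term. The parameter values and the simplified scalar driving term are assumptions for illustration; the full model couples this equation to the hydrodynamic carrier equations and Poisson's equation.

```python
import numpy as np

# illustrative parameters (not material values from the thesis)
omega_ph = 2 * np.pi * 8.0e12          # phonon angular frequency, rad/s
gamma = 1.0e12                          # phonon damping rate, 1/s
coupling = 1.0                          # arbitrary coupling constant to the driving field
dt, n_steps = 1.0e-15, 20000            # 1 fs time step, 20 ps window

t = np.arange(n_steps) * dt
# stand-in for the transient internal field launched by the ultrafast pulse
e_field = np.exp(-t / 2.0e-12) * np.cos(2 * np.pi * 2.0e12 * t)

w = np.zeros(n_steps)                   # lattice (phonon) coordinate
v = 0.0                                 # its time derivative
for k in range(n_steps - 1):
    a = -gamma * v - omega_ph**2 * w[k] + coupling * e_field[k]
    v += a * dt                         # semi-implicit Euler update
    w[k + 1] = w[k] + v * dt

# characteristic frequencies show up as peaks in the spectrum of the coordinate
spectrum = np.abs(np.fft.rfft(w))
freqs = np.fft.rfftfreq(n_steps, dt)
print(freqs[np.argmax(spectrum[1:]) + 1] / 1e12, "THz (dominant oscillation)")
```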
|
238 |
Propriedades Magnéticas Locais de Grãos de Co em Cu e Ag / Local magnetic properties of Co grains in bulk Cu and Ag. Nogueira, Renata Nascimento, 9 November 1999.
The discovery of giant magnetoresistance (GMR) in granular materials has generated great interest in the study of these systems, with particular attention devoted to Co grains embedded in Cu and Ag. Since the transport properties are closely related to structural characteristics, an accurate description of these features is required in order to understand the GMR behavior in these materials. In this work, aiming to determine some local magnetic properties, we use the real-space LMTO-ASA (RS-LMTO-ASA) method to perform a systematic study of the site and grain-size dependence of the local magnetic moments and hyperfine fields of Co grains (up to 135 atoms) in fcc Cu and Ag hosts. We have also studied Fe and Co atoms in different spatial configurations in Ag hosts: isolated impurities, Fe-Fe and Fe-Co dimers, and precipitates containing 13, 19 and 43 atoms. Fe is considered as the central impurity in the Co grains and, for the 13- and 19-atom clusters, also at interface positions. We find very stable local moments for the grains in Ag, whereas the average local moments of Co grains in Cu show a slight dependence on grain size, tending to be larger for larger grains. Our results show that free and embedded Co clusters have very different magnetic behavior. The hyperfine fields present similar values in both matrices and exhibit a systematic site dependence.
|
239 |
Avaliação e nova proposta de regionalização hidrológica para o Estado de São Paulo / Evaluation and new hydrologic regionalization proposal for the State of São Paulo. Wolff, Wagner, 6 February 2013.
Hydrological regionalization is a technique that allows information to be transferred between similar watersheds in order to estimate, at sites where no data are available, the hydrological variables of interest. It is therefore a useful tool for granting rights to use water resources, an instrument provided by Law 9433/97. Because the current hydrological regionalization model for the State of São Paulo, proposed in the 1980s, is outdated, this study aims to assess whether that model remains appropriate for use when its database is updated, and to propose a new model that overcomes the limitations of the old one. The study was conducted in the State of São Paulo, which has an area of approximately 248197 km² and lies between longitudes -44°9' and -53°5' and between latitudes -22°40' and -22°39'. Initially, data from 176 gauged stations administered by DAEE and ANA, available at http://www.sigrh.sp.gov.br, were used. For each station we determined the average annual rainfall of the watershed (P), the multiannual average streamflow (Q), the minimum average streamflow of 7 consecutive days with a return period of 10 years (Q7,10), and the streamflows with 90 and 95% permanence in time (Q90 and Q95). A consistency analysis was then carried out and the inconsistent stations were excluded from the study; 172 stations remained to be used in the evaluation of the current model and in the formulation of a new one. The model evaluation was made with the confidence index (c), defined as the product of the correlation coefficient (r) and the agreement index (d), using as estimates the streamflows generated by the model and as reference values the streamflows calculated from the gauged stations. All streamflows evaluated were classified as optimal, with a confidence index (c) above 0.85; the current model therefore rejected the hypothesis that updating its database would impair its predictive ability, so it can still be used to obtain the studied streamflows, which are the reference for granting water-use rights in different Brazilian states. However, the model has some limitations, such as extrapolation to drainage areas smaller than those used to formulate it, and problems in its computational application: it reports the average annual precipitation at the geographic coordinate of the water withdrawal point rather than for the drainage basin upstream of that point. A new model was therefore formulated that overcomes these limitations and provides greater predictive ability than the old one.
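A hedged sketch of the evaluation metric described above: the confidence index c is the product of Pearson's correlation coefficient r and Willmott's index of agreement d, computed here for a vector of observed (gauge-derived) and model-estimated streamflows. The numbers are simulated for illustration; the classification threshold ("optimal" above 0.85) follows the text.

```python
import numpy as np

def confidence_index(observed, estimated):
    """Confidence index c = r * d (Pearson correlation times Willmott's agreement index)."""
    o = np.asarray(observed, dtype=float)
    e = np.asarray(estimated, dtype=float)
    r = np.corrcoef(o, e)[0, 1]
    d = 1.0 - np.sum((e - o) ** 2) / np.sum((np.abs(e - o.mean()) + np.abs(o - o.mean())) ** 2)
    return r, d, r * d

# toy usage: streamflows (m^3/s) at a handful of hypothetical stations
q_obs = np.array([12.4, 3.1, 45.0, 7.8, 20.2, 1.9])
q_est = np.array([11.8, 3.5, 42.7, 8.4, 21.0, 2.3])
r, d, c = confidence_index(q_obs, q_est)
print(f"r={r:.3f}  d={d:.3f}  c={c:.3f}  ->", "optimal" if c > 0.85 else "check model")
```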
|
240 |
MoM modeling of metal-dielectric structures using volume integral equations. Kulkarni, Shashank Dilip, 6 May 2004.
Modeling of patch antennas and resonators on arbitrary dielectric substrates is implemented using surface RWG and volume edge-based basis functions and the Method of Moments. The performance of the solver is studied for different mesh configurations. The results obtained are tested by comparison with experiments and with the Ansoft HFSS v9 simulator. The latter uses a large number of finite elements (up to 200K) and adaptive mesh refinement, thus providing reliable data for comparison. The error in the resonant frequency is estimated for canonical resonator structures at different values of the relative dielectric constant εr, which ranges from 1 to 200. The reported results show near-perfect agreement in the estimation of the resonant frequency for all the metal-dielectric resonators. The behavior of the antenna input impedance is tested close to the first resonant frequency of the patch antenna. The error in the resonant frequency is also estimated for different structures at values of the relative dielectric constant εr ranging from 1 to 10. A larger error is observed in the calculation of the resonant frequency of the patch antenna, and this error increases with the dielectric constant of the substrate. Further scope for improvement lies in the investigation of this effect.
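For orientation on what the resonant-frequency comparison involves, here is a rough closed-form estimate (the cavity model with Hammerstad-style effective-permittivity and fringing corrections) of a rectangular patch's dominant-mode resonance versus substrate permittivity. The patch dimensions, substrate height and correction formulas are textbook approximations assumed for illustration; they are not the solver or the test cases from the thesis.

```python
import math

def patch_resonant_frequency(length_m, width_m, height_m, eps_r):
    """Approximate TM10 resonant frequency of a rectangular microstrip patch (cavity model)."""
    c0 = 299_792_458.0
    # effective relative permittivity (Hammerstad approximation)
    eps_eff = (eps_r + 1) / 2 + (eps_r - 1) / 2 * (1 + 12 * height_m / width_m) ** -0.5
    # fringing-field length extension
    dl = 0.412 * height_m * ((eps_eff + 0.3) * (width_m / height_m + 0.264)) / \
         ((eps_eff - 0.258) * (width_m / height_m + 0.8))
    return c0 / (2 * (length_m + 2 * dl) * math.sqrt(eps_eff))

# toy usage: a 30 mm x 40 mm patch on a 1.5 mm substrate, sweeping the relative permittivity
for eps_r in (1.0, 2.2, 4.4, 10.0):
    f = patch_resonant_frequency(0.030, 0.040, 0.0015, eps_r)
    print(f"eps_r = {eps_r:4.1f}  ->  f_res ~ {f/1e9:.2f} GHz")
```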
|