1

Composite and Cascaded Generalized-K Fading Channel Modeling and Their Diversity and Performance Analysis

Ansari, Imran Shafique 12 1900 (has links)
The introduction of new schemes based on communication among nodes has motivated the use of composite fading models, because the nodes experience different multipath fading and shadowing statistics, which in turn determine the statistics required for the performance analysis of different transceivers. The end-to-end signal-to-noise ratio (SNR) statistics play an essential role in determining the performance of cascaded digital communication systems. In this thesis, a closed-form expression for the probability density function (PDF) of the end-to-end SNR for independent but not necessarily identically distributed (i.n.i.d.) cascaded generalized-K (GK) composite fading channels is derived. The developed PDF expression, in terms of the Meijer G-function, allows the derivation of subsequent performance metrics, applicable to different modulation schemes, including outage probability, bit error rate for coherent as well as non-coherent systems, and average channel capacity, providing insight into the performance of a digital communication system operating over N cascaded GK composite fading channels. Another line of research motivated by the introduction of composite fading channels is error performance. Error performance is one of the main performance measures, and the derivation of its closed-form expression has proved quite involved for certain systems. Hence, in this thesis, a unified closed-form expression, applicable to different binary modulation schemes, for the bit error rate of dual-branch selection-diversity-based systems undergoing i.n.i.d. GK fading is derived in terms of the extended generalized bivariate Meijer G-function.
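The closed-form Meijer-G expressions themselves are not reproduced in the abstract. As a sanity-check companion, the following sketch simulates the setting by Monte Carlo: a generalized-K variate is the product of two independent Gamma variates (shadowing times multipath), the end-to-end SNR of a cascade is the product of the per-hop SNRs, and outage is the probability that this product falls below a threshold. All parameter names and values here are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_gk_snr(n, k, m, mean_snr=1.0):
    # Generalized-K SNR: product of two independent Gamma variates
    # (Gamma(k) shadowing x Gamma(m) multipath), scaled to unit-mean factors.
    shadow = rng.gamma(shape=k, scale=1.0 / k, size=n)
    multipath = rng.gamma(shape=m, scale=1.0 / m, size=n)
    return mean_snr * shadow * multipath

def cascaded_outage(params, gamma_th, n=200_000):
    # End-to-end SNR of N cascaded GK channels is the product of the
    # per-hop SNRs; outage is P(end-to-end SNR < gamma_th).
    snr = np.ones(n)
    for k, m in params:
        snr *= sample_gk_snr(n, k, m)
    return float(np.mean(snr < gamma_th))

# Two i.n.i.d. cascaded GK hops with illustrative fading parameters
p_out = cascaded_outage([(2.0, 1.5), (3.0, 2.5)], gamma_th=0.1)
```

An estimate like `p_out` is what a closed-form outage expression in terms of the Meijer G-function would be checked against.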
2

Resonance sums for Rankin-Selberg products

Czarnecki, Kyle Jeffrey 01 May 2016 (has links)
Consider either (i) f = f1 ⊠ f2 for two Maass cusp forms for SLm(ℤ) and SLm′(ℤ), respectively, with 2 ≤ m ≤ m′, or (ii) f = f1 ⊠ f2 ⊠ f3 for three weight-2k holomorphic cusp forms for SL2(ℤ). Let λf(n) be the normalized coefficients of the associated L-function L(s, f), which is either (i) the Rankin-Selberg L-function L(s, f1 × f2), or (ii) the Rankin triple product L-function L(s, f1 × f2 × f3). First, we derive a Voronoi-type summation formula for λf(n) involving the Meijer G-function. As an application we obtain the asymptotics for the smoothly weighted average of λf(n) against e(αn^β), i.e. the asymptotics for the associated resonance sums. Let ℓ be the degree of L(s, f). When β = 1/ℓ and α is close or equal to ±ℓq^{1/ℓ} for a positive integer q, the average has a main term of size |λf(q)| X^{1/(2ℓ)+1/2}. Otherwise, when α is fixed and 0 < β < 1/ℓ, it is shown that this average decays rapidly. Similar results have been established for individual SLm(ℤ) automorphic cusp forms and are due to the oscillatory nature of the coefficients λf(n).
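In the abstract's notation, the resonance sums under study can be written as follows; this display is a reconstruction from the abstract (with φ a smooth, compactly supported cutoff), not a quotation from the thesis:

```latex
S_X(f;\alpha,\beta) \;=\; \sum_{n \ge 1} \lambda_f(n)\,
  \phi\!\left(\frac{n}{X}\right) e\!\left(\alpha n^{\beta}\right),
\qquad e(x) = e^{2\pi i x},
```

so the stated result is that for β = 1/ℓ and α at or near ±ℓq^{1/ℓ}, the sum has a main term of size |λ_f(q)| X^{1/(2ℓ)+1/2}, while for fixed α and 0 < β < 1/ℓ it decays rapidly in X.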
3

Distances to a Point of Reference in Spatial Point Patterns

Joyner, Michele L., Seier, Edith, Jones, Thomas C. 01 November 2014 (has links)
Motivated by a study of social spider behavior, we discuss the distribution of the distances from all the events in a spatial point pattern to a point of reference that has a known location at a given moment of time. The distribution depends on both the shape of the region and the location of the point of reference. The empirical CDF is used to describe the distribution of the distances and to compare it with the CDF derived under complete spatial randomness. Empirical distributions are then compared through time, focusing on the case in which the point of reference changes with time.
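The comparison described above can be sketched numerically: compute the empirical CDF of observed distances to the reference point, approximate the CDF under complete spatial randomness (CSR) by simulating many uniform points in the same region, and look at the discrepancy between the two curves. The unit-square region, centred reference point, and sample sizes below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def distances_to_reference(points, ref):
    # Euclidean distance from each event to the reference point.
    return np.linalg.norm(points - ref, axis=1)

def ecdf(values, grid):
    # Empirical CDF of `values` evaluated on `grid`.
    return np.searchsorted(np.sort(values), grid, side="right") / len(values)

# Observed pattern (here itself simulated) and a CSR benchmark in the unit
# square, with the reference point (e.g. a focal spider) at the centre.
ref = np.array([0.5, 0.5])
observed = rng.uniform(0, 1, size=(200, 2))
csr = rng.uniform(0, 1, size=(100_000, 2))

grid = np.linspace(0.0, np.sqrt(0.5), 200)   # max distance from the centre
F_obs = ecdf(distances_to_reference(observed, ref), grid)
F_csr = ecdf(distances_to_reference(csr, ref), grid)

# Maximum vertical discrepancy between the two distance CDFs
D = float(np.max(np.abs(F_obs - F_csr)))
```

Repeating this at successive time points, with `ref` updated to the reference point's current location, gives the through-time comparison the abstract describes.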
4

Analysis of borehole heat exchanger in an existing ground-source heat pump installation

Derouet, Marc January 2014 (has links)
Ground-source heat pump (GSHP) systems are commonly used all over Sweden to supply heating, and sometimes cooling, to different kinds of housing and commercial facilities. Many large installations are by now between 10 and 20 years old. While the design of such systems has been studied, few studies have followed their performance over time in detail. Based on conductive heat transfer, the heat extraction process makes the ground temperature decrease when installations are used only for heating. This thesis proposes a method to evaluate how the temperature in a borehole heat exchanger of a GSHP will evolve. The project focuses on the heat transfer from the ground to the boreholes, modelled using g-functions generated with the Finite Line Source (FLS) approach. g-functions are non-dimensional parameters characterizing the evolution of the ground thermal resistance under variable heat extraction loads. A model using Matlab has been developed and validated against relevant publications. As a case study, the method is applied to an existing 15-year-old GSHP installation, composed of 26 boreholes and 3 heat pumps, in order to compare the obtained results with data measured on site. The installation comprises two borehole sub-fields: 14 boreholes were drilled in 1998 and the remaining 12 in 2009. Measured variable heat extraction loads were superposed using dedicated site g-functions for the two borehole fields. As a result, a comparison between modelled and measured heat carrier fluid temperatures in the boreholes over the last 6 months is presented here, as well as a 20-year forecast of the ground temperature at the interface with the boreholes.
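The thesis superposes measured variable heat extraction loads using FLS-based g-functions in Matlab. As a simplified stand-in, the sketch below uses the infinite line source (ILS) step response instead of the FLS, and applies the same temporal-superposition idea: each increment in the extraction load contributes its own step response from the time it is switched on. All ground and borehole parameters are illustrative assumptions, not values from the thesis.

```python
import numpy as np
from scipy.special import exp1

# Ground / borehole parameters (illustrative values, not from the thesis)
lam = 3.0        # ground thermal conductivity, W/(m K)
alpha = 1.0e-6   # ground thermal diffusivity, m^2/s
rb = 0.055       # borehole radius, m
T0 = 8.0         # undisturbed ground temperature, deg C

def ils_step(t):
    # Infinite-line-source temperature response at the borehole wall
    # to a unit (1 W/m) step heat extraction started at t = 0.
    return exp1(rb**2 / (4.0 * alpha * t)) / (4.0 * np.pi * lam)

def wall_temperature(loads, dt):
    # Temporal superposition: each load increment dq[i] contributes an
    # ILS step response from the interval in which it is switched on.
    q = np.asarray(loads, dtype=float)
    dq = np.diff(q, prepend=0.0)             # load increments, W/m
    n = len(q)
    T = np.full(n, T0)
    for i in range(n):
        t = np.arange(1, n - i + 1) * dt      # elapsed time since increment i
        T[i:] -= dq[i] * ils_step(t)
    return T

month = 30 * 24 * 3600.0
loads = [20, 25, 15, 5, 0, 0, 0, 0, 5, 15, 20, 25] * 2  # W/m, two years
Tb = wall_temperature(loads, month)
```

Replacing `ils_step` with an FLS-based g-function evaluation, and extending the load series to 20 years, gives the kind of long-term borehole-wall temperature forecast the thesis produces.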
5

Understanding numerically generated g-functions: A study case for a 6x6 borehole field

Perez Gonzalez, Jesus Angel January 2013 (has links)
Ground source heat pump (GSHP) systems are an emerging technology used to exchange heat with the ground through buried heat exchangers. The thermal response of a borehole field can be characterized by its g-function, a non-dimensional temperature response factor that can be calculated using either numerical or analytical solutions. Eskilson carried out the first study on the calculation of these g-functions. Lamarche and Beauchamp proposed an alternative analytical approach based on the Finite Line Source (FLS). Generally, both solutions give similar results with some small differences, which may be attributed to the boundary conditions used in the two approaches: the FLS solution considers uniform heat flux along the borehole wall in all the heat exchangers, while Eskilson's model imposes uniform temperature at the borehole wall in all the pipes in the field. In this Master of Science thesis, the temperature response factors (g-functions) of a 6x6 borehole field with 36 borehole heat exchangers (BHE) arranged in a square configuration are obtained from new numerical models, mainly based on the use of a highly conductive material composing the BHE. For this purpose, the commercial software COMSOL Multiphysics is employed. The aim of this thesis is to gain deeper insight into how the boundary condition imposed in the model affects the generated g-function, seeking closer approximations to reality. Several strategies concerning the geometry, the size of the model and the mesh are applied to reduce the computing time. The influence of the geothermal heat flux and of the highly conductive material (HCM) composing the BHEs is also studied in our model. Going further, the thermal behaviour of the ground is studied by imposing variable heating and cooling loads over seasonal periods spanning 25 years.
Finally, the g-functions obtained from our numerical models are compared to the one generated with the commercial software Earth Energy Designer (EED), which implements the numerical solution proposed by Eskilson, and to the one generated with the FLS approach. The results may provide a closer approximation to the real thermal response of large borehole fields.
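The COMSOL models themselves cannot be reproduced here, but the spatial-superposition idea behind a field response factor can be sketched: under the uniform-heat-flux boundary condition, each borehole's wall temperature rise is the sum of its own line-source response plus the responses of all neighbouring boreholes, averaged over the field. The sketch below uses the infinite line source as a stand-in for the FLS, with illustrative parameter values.

```python
import numpy as np
from scipy.special import exp1

lam, alpha = 3.0, 1.0e-6       # conductivity W/(m K), diffusivity m^2/s (assumed)
rb, B = 0.055, 6.0             # borehole radius and spacing, m (assumed)

# 6x6 grid of borehole positions, matching the square field in the thesis
xs, ys = np.meshgrid(np.arange(6) * B, np.arange(6) * B)
pos = np.column_stack([xs.ravel(), ys.ravel()])

def field_response(t):
    # Dimensionless mean wall-temperature response of the field under
    # uniform heat flux: average over boreholes of self + neighbour
    # line-source contributions (ILS used as a simplified stand-in).
    g = 0.0
    for i, p in enumerate(pos):
        r = np.linalg.norm(pos - p, axis=1)
        r[i] = rb                              # own wall evaluated at rb
        g += float(np.sum(exp1(r**2 / (4.0 * alpha * t))))
    return g / (len(pos) * 4.0 * np.pi)        # scaled like a g-function

# Borehole-to-borehole interaction grows strongly with time
year = 365.25 * 24 * 3600.0
g1, g25 = field_response(year), field_response(25 * year)
```

The growth from `g1` to `g25` reflects why long-term interaction between boreholes dominates the design of large fields, which is what the g-function encodes.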
6

The development of the quaternion normal distribution

Loots, Mattheus Theodor 27 June 2011 (has links)
In this dissertation an overview of the real representation of quaternions in distribution theory is given. The density functions of the p-variate and matrix-variate quaternion normal distributions are derived from first principles, while that of the quaternion Wishart distribution is derived from the real associated Wishart distribution via the characteristic function. Applications of this theory in hypothesis testing are presented, and the density function of Wilks's statistic is derived for quaternion Wishart matrices. / Dissertation (MSc)--University of Pretoria, 2010. / Statistics / unrestricted
7

Temperaturzoner för lagring av värmeenergi i cirkulärt borrhålsfält / Temperature stratification of borehole thermal energy storages

Penttilä, Jens January 2013 (has links)
The thermal response of a borehole field is often described by non-dimensional response factors called g-functions. The g-function was first generated as a numerical solution based on the SBM (Superposition Borehole Model). An analytical approach, the FLS (Finite Line Source), is also accepted for generating the g-function. In this work the potential to numerically produce g-functions is studied for circular borehole fields using the commercial software COMSOL. The numerical method is flexible and allows the generation of g-functions for any borehole field geometry. The approach is partially validated by comparing the solution for a square borehole field containing 36 boreholes (6x6) with g-functions generated with the FLS approach and with the program EED (Earth Energy Designer). The latter is based on Eskilson's SBM, presented in one of the first works in which the concept of g-functions was introduced. Once the approach is validated, the square COMSOL model is compared with a circular-geometry borehole field developed by the same method, consisting of 3 concentric rings having 6, 12, and 18 boreholes. Finally, the influence on the circular-geometry g-function is studied when connecting the boreholes in radial zones with different thermal loads. / SEEC Scandinavian Energy Efficiency Co.
8

Numerical analysis and multi-precision computational methods applied to the extant problems of Asian option pricing and simulating stable distributions and unit root densities

Cao, Liang January 2014 (has links)
This thesis considers new methods that exploit recent developments in computer technology to address three extant problems in the area of Finance and Econometrics. The problem of Asian option pricing has endured for the last two decades in spite of many attempts to find a robust solution across all parameter values. All recently proposed methods are shown to fail when computations are conducted using standard machine precision because as more and more accuracy is forced upon the problem, round-off error begins to propagate. Using recent methods from numerical analysis based on multi-precision arithmetic, we show using the Mathematica platform that all extant methods have efficacy when computations use sufficient arithmetic precision. This creates the proper framework to compare and contrast the methods based on criteria such as computational speed for a given accuracy. Numerical methods based on a deformation of the Bromwich contour in the Geman-Yor Laplace transform are found to perform best provided the normalized strike price is above a given threshold; otherwise methods based on Euler approximation are preferred. The same methods are applied in two other contexts: the simulation of stable distributions and the computation of unit root densities in Econometrics. The stable densities are all nested in a general function called a Fox H function. The same computational difficulties as above apply when using only double-precision arithmetic but are again solved using higher arithmetic precision. We also consider simulating the densities of infinitely divisible distributions associated with hyperbolic functions. Finally, our methods are applied to unit root densities. Focusing on the two fundamental densities, we show our methods perform favorably against the extant methods of Monte Carlo simulation, the Imhof algorithm and some analytical expressions derived principally by Abadir. 
Using Mathematica, the main two-dimensional Laplace transform in this context is reduced to a one-dimensional problem.
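The thesis's central computational point, that extant methods fail in double precision because round-off error propagates, but succeed under multi-precision arithmetic, can be illustrated on a toy example. The thesis uses Mathematica; here Python's standard-library `decimal` module stands in as the multi-precision engine. Evaluating the Taylor series of exp(x) at a large negative x is a classic case where huge alternating terms cancel catastrophically in double precision, while the same series computed with 50 significant digits is accurate.

```python
import math
from decimal import Decimal, getcontext

def exp_taylor_float(x, terms=200):
    # Naive Taylor series in double precision: for large negative x the
    # huge alternating terms cancel and round-off error dominates the result.
    s, term = 0.0, 1.0
    for n in range(1, terms + 1):
        s += term
        term *= x / n
    return s

def exp_taylor_decimal(x, terms=200, prec=50):
    # Same series with 50 significant digits: the cancellation is harmless
    # because each intermediate term carries enough precision.
    getcontext().prec = prec
    s, term = Decimal(0), Decimal(1)
    xd = Decimal(x)
    for n in range(1, terms + 1):
        s += term
        term *= xd / n
    return s

x = -30
bad = exp_taylor_float(x)             # swamped by round-off
good = float(exp_taylor_decimal(x))   # agrees with math.exp(-30)
```

The same mechanism, forcing more accuracy onto a double-precision computation until round-off propagates, is what the thesis diagnoses in the Asian option, stable density, and unit root settings, and the same cure (sufficient arithmetic precision) applies.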
9

Análise de carteiras em tempo discreto / Discrete time portfolio analysis

Kato, Fernando Hideki 14 April 2004 (has links)
In this thesis, Markowitz's portfolio selection model will be extended by means of a discrete-time analysis and more realistic hypotheses. A finite tensor product of Erlang densities will be used to approximate the multivariate probability density function of the single-period discrete returns of dependent assets. The Erlang is a particular case of the Gamma distribution. A finite mixture can generate multimodal asymmetric densities and the tensor product generalizes this concept to higher dimensions. Assuming that the multivariate density was independent and identically distributed (i.i.d.) in the past, the approximation can be calibrated with historical data using the maximum likelihood criterion. This is a large-scale optimization problem, but with a special structure. Assuming that this multivariate density will be i.i.d. in the future, the density of the discrete returns of a portfolio of assets with nonnegative weights will be a finite mixture of Erlang densities. The risk will be calculated with the Downside Risk measure, which is convex for certain parameters, is not based on quantiles, does not cause risk underestimation and makes the single- and multiperiod optimization problems convex.
The discrete return is a multiplicative random variable over time. The multiperiod distribution of the discrete returns of a sequence of T portfolios will be a finite mixture of Meijer G distributions. After a change of the probability measure to the average compound, it is possible to calculate the risk and the return, which will lead to the multiperiod efficient frontier, where each point represents one or more ordered sequences of T portfolios. The portfolios of each sequence must be calculated from the future to the present, keeping the expected return at the desired level, which can be a function of time. A dynamic asset allocation strategy is to redo the calculations at each period, using new available information. If the time horizon tends to infinity, then the efficient frontier, in the average compound probability measure, will tend to a single point, given by the Kelly portfolio, whatever the risk measure. To select one among several portfolio optimization models, it is necessary to compare their relative performances. The efficient frontier of each model must be plotted in its respective graph. As the weights of the assets of the portfolios on these curves are known, it is possible to plot all curves in the same graph. For a given expected return, the efficient portfolios of the models can be calculated, and the realized returns and their differences along a backtest can be compared.
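Two of the abstract's building blocks, a portfolio-return density expressed as a finite Erlang mixture and a convex Downside Risk measure computed from it, can be sketched directly. The mixture weights, shapes, and scales below are illustrative placeholders, not calibrated values, and the lower partial moment used here is one common form of a downside risk measure, assumed to match the abstract's intent.

```python
import numpy as np
from scipy.stats import erlang

# A finite Erlang mixture as a stand-in for the portfolio's discrete-return
# density (weights, shapes and scales are illustrative, not calibrated).
weights = np.array([0.3, 0.5, 0.2])
shapes = [2, 5, 9]
scales = [0.30, 0.12, 0.08]

def mixture_pdf(x):
    # Density of the finite Erlang mixture at x.
    return sum(w * erlang.pdf(x, a=k, scale=s)
               for w, k, s in zip(weights, shapes, scales))

def downside_risk(target, order=2, n=20_000):
    # Lower partial moment E[(target - R)_+^order], a convex downside risk
    # measure, computed by numerical integration of the mixture density.
    x = np.linspace(0.0, target, n)
    integrand = (target - x) ** order * mixture_pdf(x)
    return float(np.sum(integrand) * (x[1] - x[0]))

risk = downside_risk(target=1.0)
```

In the thesis's framework the weights of the mixture would come from maximum-likelihood calibration on historical returns, and the risk measure would enter the convex portfolio optimization at each of the T periods.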
