51

Methods for handling missing data due to a limit of detection in longitudinal lognormal data

Dick, Nicole Marie January 1900 (has links)
Master of Science / Department of Statistics / Suzanne Dubnicka / In animal science, challenge model studies often produce longitudinal data. Many times the lognormal distribution is useful in modeling the data at each time point. Escherichia coli O157 (E. coli O157) studies measure and record the concentration of colonies of the bacteria. There are times when the concentration of colonies present is too low, falling below a limit of detection. In these cases a zero is recorded for the concentration. Researchers employ a method of enrichment to determine if E. coli O157 was truly not present. This enrichment process searches for bacteria colony concentrations a second time to confirm or refute the previous measurement. If enrichment comes back without evidence of any bacteria colonies present, a zero remains as the observed concentration. If enrichment comes back with presence of bacteria colonies, a minimum value is imputed for the concentration. At the conclusion of the study the data are log10-transformed. One problem with the transformation is that the log of zero is mathematically undefined, so any observed concentrations still recorded as a zero after enrichment cannot be log-transformed. Current practice carries the zero value from the lognormal data to the normal data. The purpose of this report is to evaluate methods for handling missing data due to a limit of detection and to provide results for various analyses of the longitudinal data. Multiple methods of imputing a value for the missing data are compared. Each method is analyzed by fitting three different models using SAS. To determine which method most accurately explains the data, a simulation study was conducted.
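The imputation-after-enrichment step described above can be sketched as follows. This is a minimal illustration under assumed constants: the LOD value, the LOD/2 substitution rule, and all names are illustrative assumptions, not the report's actual methods.

```python
import numpy as np

def impute_lod(conc, lod, scheme="half"):
    """Impute concentrations recorded as zero (below the limit of detection).

    scheme="half" substitutes LOD/2; scheme="full" substitutes the LOD itself.
    Both are common ad hoc choices shown only for illustration; the report
    compares several such methods, and these constants are not claimed to be its.
    """
    conc = np.asarray(conc, dtype=float)
    fill = lod / 2.0 if scheme == "half" else float(lod)
    filled = np.where(conc <= 0.0, fill, conc)
    return np.log10(filled)  # log10-transform after imputation, as in the study design

# Hypothetical E. coli O157 concentrations (CFU/g); two fell below a LOD of 100
obs = [0.0, 250.0, 0.0, 1300.0]
log_half = impute_lod(obs, lod=100.0, scheme="half")
```

The imputed values can then be carried into a longitudinal model without the undefined log-of-zero problem.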
52

Continuum and molecular dynamics analyses of lubricant evaporation and flow due to laser heating in heat-assisted magnetic recording

Haq, Mohammad Ashraful 14 September 2018 (has links)
No description available.
53

Currents Induced on Wired I.T. Networks by Randomly Distributed Mobile Phones - A Computational Study

Excell, Peter S., Abd-Alhameed, Raed, Vaul, John A. January 2006 (has links)
No / The probability density and exceedance probability functions of the induced currents in a screened cable connecting two enclosures, resulting from the close presence of single and multiple mobile phones working at 900 MHz, are investigated. The analysis of the problem is undertaken using the Method of Moments but, due to weak coupling, the impedance matrix was modified to reduce the memory and time requirements of the problem, enabling it to be executed multiple times. The empirical probability distribution functions (PDFs) and exceedance probabilities for the induced currents are presented. The form of the PDFs is seen to be quite well approximated by a log-normal distribution for a single source and by a Weibull distribution for multiple sources.
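The reported lognormal behaviour of single-source induced currents can be illustrated with a toy fit on synthetic data; the parameters, sample size, and threshold below are assumptions, not the paper's computational setup.

```python
import numpy as np
from math import erf, log, sqrt

def lognormal_fit(samples):
    """Moment fit on the log scale: returns (mu, sigma) of log(samples)."""
    logs = np.log(samples)
    return float(logs.mean()), float(logs.std(ddof=1))

def lognormal_exceedance(x, mu, sigma):
    """P(X > x) for X ~ Lognormal(mu, sigma): 1 - Phi((ln x - mu) / sigma)."""
    z = (log(x) - mu) / (sigma * sqrt(2.0))
    return 0.5 * (1.0 - erf(z))

# Synthetic "induced current" samples standing in for the paper's simulated data
rng = np.random.default_rng(0)
currents = rng.lognormal(mean=-2.0, sigma=0.5, size=10_000)
mu, sigma = lognormal_fit(currents)
p_exceed = lognormal_exceedance(0.2, mu, sigma)  # exceedance probability at 0.2 (arbitrary units)
```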
54

Metody MCMC pro finanční časové řady / MCMC methods for financial time series

Tritová, Hana January 2016 (has links)
This thesis focuses on estimating parameters of an appropriate model for daily returns using the Markov chain Monte Carlo (MCMC) method and Bayesian statistics. We describe MCMC methods, such as Gibbs sampling and the Metropolis-Hastings algorithm, and their basic properties. After that, we introduce different financial models, focusing in particular on the lognormal autoregressive model. We then theoretically apply Gibbs sampling to the lognormal autoregressive model using principles of Bayesian statistics. Afterwards, we analyze the procedures used in simulations of the posterior distribution using Gibbs sampling. Finally, we present the processed output of both simulated and real data analyses.
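A minimal sketch of Gibbs sampling for a lognormal AR(1) model, in the spirit of the thesis; the priors, hyperparameters, and all names here are illustrative assumptions, not the thesis's specification.

```python
import numpy as np

def gibbs_lognormal_ar1(x, n_iter=2000, burn=500, seed=1):
    """Gibbs sampler for a lognormal AR(1): log x_t = c + phi * log x_{t-1} + eps_t.

    Conjugate priors assumed for illustration: N(0, 10^2) on each of (c, phi)
    and InvGamma(2, 1) on the innovation variance sigma^2.
    """
    rng = np.random.default_rng(seed)
    y = np.log(np.asarray(x, dtype=float))
    Y, X = y[1:], np.column_stack([np.ones(len(y) - 1), y[:-1]])
    prior_prec = np.eye(2) / 100.0      # precision of the N(0, 10^2 I) prior
    a0, b0 = 2.0, 1.0                   # InvGamma(a0, b0) prior on sigma^2
    sig2 = 1.0
    draws = []
    for it in range(n_iter):
        # (c, phi) | sigma^2 : Bayesian linear-regression update
        prec = prior_prec + X.T @ X / sig2
        cov = np.linalg.inv(prec)
        mean = cov @ (X.T @ Y / sig2)
        beta = rng.multivariate_normal(mean, cov)
        # sigma^2 | (c, phi) : inverse-gamma update from the residuals
        resid = Y - X @ beta
        a = a0 + len(Y) / 2.0
        b = b0 + 0.5 * resid @ resid
        sig2 = 1.0 / rng.gamma(a, 1.0 / b)
        if it >= burn:
            draws.append((beta[0], beta[1], sig2))
    return np.array(draws)

# Synthetic log-series with c = 0.1, phi = 0.8, sigma = 0.2
rng = np.random.default_rng(42)
y = [0.5]
for _ in range(1500):
    y.append(0.1 + 0.8 * y[-1] + rng.normal(0.0, 0.2))
draws = gibbs_lognormal_ar1(np.exp(y))
phi_hat = draws[:, 1].mean()
```

Because both conditional distributions are conjugate, the chain mixes quickly and posterior means land close to the generating parameters.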
55

Parametric, Non-Parametric And Statistical Modeling Of Stony Coral Reef Data

Hoare, Armando 08 April 2008 (has links)
Like coral reefs worldwide, the Florida Reef Tract has dramatically declined within the past two decades. Monitoring of 40 sites throughout the Florida Keys National Marine Sanctuary has taken a multiple-parameter approach to assess spatial and temporal changes in the status of the ecosystem. The objectives of the present study are as follows. In chapter one, we review past coral reef studies; emphasis is placed on recent studies of the stony corals of reefs in the lower Florida Keys. We also review the economic impact of coral reefs on the state of Florida. In chapter two, we identify the underlying probability distribution function of the stony coral cover proportions and obtain better estimates of their statistical properties. Furthermore, we improve present procedures for constructing confidence intervals for the true median and mean of the underlying probability distribution. In chapter three, we investigate the applicability of the normal probability distribution assumption made on the pseudovalues obtained from the jackknife procedure for the Shannon-Wiener diversity index used in previous studies. We investigate a new and more effective approach to estimating the Shannon-Wiener and Simpson's diversity indices. In chapter four, we develop the best possible estimate of the probability distribution function of the jackknife pseudovalues, obtained from the jackknife procedure for the Shannon-Wiener diversity index used in previous studies, using the nonparametric kernel density estimation method. This nonparametric procedure gives very effective estimates of the statistical measures for the jackknife pseudovalues. Lastly, the present study develops a predictive statistical model for stony coral cover. In addition to identifying the attributable variables that influence the stony coral cover data of the lower Florida Keys, we investigate the possible interactions present.
The final form of the developed statistical model gives good estimates of the stony coral cover given some information of the attributable variables. Our nonparametric and parametric approach to analyzing coral reef data provides a sound basis for developing efficient ecosystem models that estimate future trends in coral reef diversity. This will give the scientists and managers another tool to help monitor and maintain a healthy ecosystem.
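The jackknife pseudovalue construction for the Shannon-Wiener index, central to chapters three and four above, can be sketched as follows. The delete-one-individual resampling unit and the species counts are assumptions for illustration, not the study's actual data or design.

```python
import numpy as np

def shannon(counts):
    """Shannon-Wiener diversity H' = -sum p_i ln p_i over observed species."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    return float(-(p * np.log(p)).sum())

def jackknife_pseudovalues(counts):
    """Delete-one-individual jackknife pseudovalues for H'.

    One standard jackknife construction; the thesis's exact resampling unit
    (individual vs. sample site) is an assumption here.
    """
    counts = np.asarray(counts, dtype=float)
    n = int(counts.sum())
    h_full = shannon(counts)
    pseudo = []
    for i, c in enumerate(counts):
        if c == 0:
            continue
        reduced = counts.copy()
        reduced[i] -= 1                      # remove one individual of species i
        h_i = shannon(reduced)
        pseudo.extend([n * h_full - (n - 1) * h_i] * int(c))
    return np.array(pseudo)

counts = [30, 25, 20, 15, 10]                # hypothetical stony coral species counts
pv = jackknife_pseudovalues(counts)
h_jack = pv.mean()                           # bias-reduced jackknife estimate of H'
```

The empirical distribution of `pv` is exactly what a kernel density estimate would then be fitted to in place of the normality assumption.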
56

Comparing Approximations for Risk Measures Related to Sums of Correlated Lognormal Random Variables

Karniychuk, Maryna 09 January 2007 (has links) (PDF)
In this thesis the performances of different approximations are compared for a standard actuarial and financial problem: the estimation of quantiles and conditional tail expectations of the final value of a series of discrete cash flows. To calculate risk measures such as quantiles and Conditional Tail Expectations, one needs the distribution function of the final wealth. The final value of a series of discrete payments in the considered model is the sum of dependent lognormal random variables. Unfortunately, its distribution function cannot be determined analytically, so usually one has to use time-consuming Monte Carlo simulations. Since computational time remains a serious drawback of Monte Carlo simulations, several analytical techniques for approximating the distribution function of final wealth are proposed within this thesis. These are the widely used moment-matching approximations and innovative comonotonic approximations. Moment-matching methods approximate the unknown distribution function by a given one in such a way that some characteristics (in the present case the first two moments) coincide. The ideas of two well-known approximations are described briefly, and analytical formulas for valuing quantiles and Conditional Tail Expectations are derived for both. Recently, a large group of scientists from the Catholic University of Leuven in Belgium derived comonotonic upper and lower bounds for sums of dependent lognormal random variables; these are bounds in terms of "convex order". To provide the theoretical background for comonotonic approximations, several fundamental ordering concepts such as stochastic dominance, stop-loss order and convex order, together with some important relations between them, are introduced. The last two concepts are closely related: both stochastic orders express which of two random variables is the "less dangerous/more attractive" one. 
The central idea of the comonotonic upper bound approximation is to replace the original sum, representing final wealth, by a new sum whose components have the same marginal distributions as the components in the original sum, but with a "more dangerous/less attractive" dependence structure. The upper bound, or, mathematically speaking, the convex-largest sum, is obtained when the components of the sum are the components of a comonotonic random vector. Therefore, fundamental concepts of comonotonicity theory which are important for the derivation of convex bounds are introduced, and the most widespread examples of comonotonicity which emerge in a financial context are described. In addition to the upper bound, a lower bound can be derived as well; this provides a measure of the reliability of the upper bound. The lower bound approach is based on the technique of conditioning: it is obtained by applying Jensen's inequality for conditional expectations to the original sum of dependent random variables. Two slightly different versions of the conditioning random variable are considered in this thesis. They give rise to two different approaches, referred to as the comonotonic lower bound and the comonotonic "maximal variance" lower bound approaches. Special attention is given to the class of distortion risk measures. It is shown that the quantile risk measure as well as the Conditional Tail Expectation (under some additional conditions) belong to this class. It is proved that both risk measures under consideration are additive for a sum of comonotonic random variables, i.e. the quantile and Conditional Tail Expectation of the comonotonic upper and lower bounds can easily be obtained by summing the corresponding risk measures of the marginals involved. A special subclass of distortion risk measures, referred to as the class of concave distortion risk measures, is also considered. 
It is shown that the quantile risk measure is not a concave distortion risk measure, while the Conditional Tail Expectation (under some additional conditions) is. A theoretical justification is given for the fact that the "concave" Conditional Tail Expectation preserves the convex order relation between random variables, and it is shown that this property does not necessarily hold for the quantile risk measure, as it is not a concave risk measure. Finally, the accuracy and efficiency of the two moment-matching, comonotonic upper bound, comonotonic lower bound and "maximal variance" lower bound approximations are examined for a wide range of parameters by comparison with results obtained by Monte Carlo simulation. The numerical results justify that, generally, in the current setting the lower bound approach outperforms the other methods. Moreover, the preservation of the convex order relation between the convex bounds for the final wealth by the Conditional Tail Expectation is demonstrated numerically, and it is justified numerically that this property does not necessarily hold true for the quantile.
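The additivity of quantiles over comonotonic sums, central to the upper-bound approach above, can be checked numerically: for a comonotonic vector every component is driven by the same uniform (here, the same standard normal Z), so the quantile of the sum equals the sum of the marginal quantiles. The lognormal parameters below are illustrative, not the thesis's cash-flow model.

```python
import numpy as np
from statistics import NormalDist

def comonotonic_quantile(mus, sigmas, p):
    """p-quantile of the comonotonic upper bound S^c = sum_i exp(mu_i + sigma_i * Z).

    Quantiles are additive for comonotonic sums, so the quantile of the bound
    is simply the sum of the marginal lognormal quantiles.
    """
    z = NormalDist().inv_cdf(p)
    return float(sum(np.exp(m + s * z) for m, s in zip(mus, sigmas)))

# Check additivity against direct simulation of the comonotonic vector
rng = np.random.default_rng(7)
mus, sigmas = [0.0, 0.1, 0.2], [0.3, 0.25, 0.2]
z = rng.standard_normal(200_000)
s_c = sum(np.exp(m + s * z) for m, s in zip(mus, sigmas))  # common driver Z => comonotonic
q_mc = float(np.quantile(s_c, 0.95))
q_an = comonotonic_quantile(mus, sigmas, 0.95)
```

The simulated 95% quantile of the comonotonic sum matches the analytical sum of marginal quantiles, which is what makes the convex upper bound cheap to evaluate.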
57

市場模型下利率連動債券評價 — 以逆浮動、雪球型、及每日區間型為例 / Callable LIBOR Exotics Valuation in Lognormal Forward LIBOR Model, Cases of Callable Inverse Floater, Callable Cumulative Inverse Floater, and Callable Daily Range Accrual Note

趙子賢, Chao, Tzu-Hsien Unknown Date (has links)
The market for structured notes has been blossoming, and the lognormal forward LIBOR model is more suitable for the valuation of structured notes than traditional interest rate models. In this article, we perform three case studies of the valuation of structured notes linked to LIBOR in the lognormal forward LIBOR model. Because the model is non-Markovian, it is most easily implemented by Monte Carlo simulation; therefore, the least-squares Monte Carlo approach is used to handle the callable feature of the structured notes in our case studies.
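The least-squares Monte Carlo idea used for the callable feature can be sketched on a simpler instrument, a Bermudan put under geometric Brownian motion, rather than the lognormal forward LIBOR model itself; all parameters and the quadratic regression basis are illustrative assumptions.

```python
import numpy as np

def lsm_bermudan_put(s0, strike, r, sigma, T, n_steps, n_paths, seed=3):
    """Least-squares Monte Carlo (Longstaff-Schwartz) for a Bermudan put on GBM.

    Illustrates the LSM backward induction used for callable/early-exercise
    features; the thesis applies the same idea in the lognormal forward LIBOR
    model, which is not reproduced here.
    """
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    z = rng.standard_normal((n_paths, n_steps))
    log_s = np.log(s0) + np.cumsum((r - 0.5 * sigma**2) * dt
                                   + sigma * np.sqrt(dt) * z, axis=1)
    s = np.exp(log_s)
    cash = np.maximum(strike - s[:, -1], 0.0)          # exercise value at maturity
    for t in range(n_steps - 2, -1, -1):
        cash *= np.exp(-r * dt)                        # discount one step back
        itm = strike - s[:, t] > 0.0                   # regress only on in-the-money paths
        if itm.sum() > 3:
            x = s[itm, t]
            basis = np.column_stack([np.ones_like(x), x, x**2])
            coef, *_ = np.linalg.lstsq(basis, cash[itm], rcond=None)
            continuation = basis @ coef                # regression estimate of holding value
            exercise = strike - x
            do_ex = exercise > continuation
            cash[np.flatnonzero(itm)[do_ex]] = exercise[do_ex]
    return float(np.exp(-r * dt) * cash.mean())

price = lsm_bermudan_put(s0=100, strike=100, r=0.05, sigma=0.2, T=1.0,
                         n_steps=50, n_paths=20_000)
```

The regression step replaces the unknown continuation value with a cheap cross-sectional estimate, which is exactly what makes the callable feature tractable in a non-Markovian simulation.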
58

Estimation of energy detection thresholds and error probability for amplitude-modulated short-range communication radios

Anttonen, A. (Antti) 30 November 2011 (has links)
Abstract In this thesis, novel data and channel estimation methods are proposed and analyzed for low-complexity short-range communication (SRC) radios. Low complexity is challenging to achieve especially in very wideband or millimeter-wave SRC radios where phase recovery and energy capture from numerous multipaths easily become a bottleneck for system design. A specific type of transceiver is selected using pulse amplitude modulation (PAM) at the transmitter and energy detection (ED) at the receiver, and it is thus called an ED-PAM system. Nonnegative PAM alphabets allow using an ED structure which enables a phase-unaware detection method for avoiding complicated phase recovery at the receiver. Moreover, the ED-PAM approach results in a simple multipath energy capture, and only one real decision variable, whose dimension is independent of the symbol alphabet size, is needed. In comparison with optimal phase-aware detection, the appealing simplicity of suboptimal ED-PAM systems is achieved at the cost of the need for a higher transmitted signal energy or shorter link distance for obtaining a sufficient signal-to-noise ratio (SNR) at the receiver, as ED-PAM systems are more vulnerable to the effects of noise and interference. On the other hand, the consequences of requiring a higher SNR may not be severe in the type of SRC scenarios where a sufficient received SNR is readily available due to a short link distance. Furthermore, significant interference can be avoided by signal design. However, what has slowed down the development of ED-PAM systems is that efficient symbol decision threshold estimation and related error probability analysis in multipath fading channels have remained as unsolved problems. Based on the above observations, this thesis contributes to the state-of-the-art of the design and analysis for ED-PAM systems as follows. 
Firstly, a closed-form near-optimal decision threshold selection method, which adapts to a time-varying channel gain and enables an arbitrary choice of the PAM alphabet size and an integer time-bandwidth product of the receiver filters, is proposed. Secondly, two blind estimation schemes of the parameters for the threshold estimation are introduced. Thirdly, analytical error probability evaluation in frequency-selective multipath fading channels is addressed. Special attention is given to lognormal fading channels, which are typically used to model very wideband SRC multipath channels. Finally, analytical error probability evaluation with nonideal parameter estimation is presented. The results can be used in designing low-complexity transceivers for very wideband and millimeter-wave wireless SRC devices of the future.
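A toy illustration of energy detection for nonnegative PAM with simple midpoint decision thresholds; this is a crude stand-in for intuition only, not the thesis's near-optimal closed-form threshold method, and all parameters (alphabet, SNR, time-bandwidth product) are assumptions.

```python
import numpy as np

def simulate_ed_pam(levels, snr_db, n_sym=100_000, m=4, seed=9):
    """Toy energy-detection receiver for nonnegative PAM.

    The decision variable is the received energy summed over m noise samples
    (time-bandwidth product m); thresholds are placed midway between the mean
    decision-variable values of adjacent amplitudes. This midpoint rule is a
    simple assumption, not the thesis's threshold estimator.
    """
    rng = np.random.default_rng(seed)
    levels = np.asarray(levels, dtype=float)
    n0 = levels.max() ** 2 / (10 ** (snr_db / 10))     # noise variance per sample
    tx = rng.integers(0, len(levels), n_sym)           # transmitted symbol indices
    amps = levels[tx]
    noise = rng.normal(0.0, np.sqrt(n0), (n_sym, m))
    sig = amps[:, None] / np.sqrt(m) + noise           # symbol energy spread over m samples
    energy = (sig ** 2).sum(axis=1)                    # ED decision variable
    means = levels ** 2 + m * n0                       # E[energy | amplitude]
    thresholds = (means[:-1] + means[1:]) / 2.0        # midpoint thresholds
    rx = np.searchsorted(thresholds, energy)
    return float((rx != tx).mean())

ser = simulate_ed_pam(levels=[0.0, 1.0, 2.0, 3.0], snr_db=25)
```

Raising the SNR shrinks the noise-induced spread of the energy variable around each level, so the symbol error rate drops, which is the trade-off the abstract describes against coherent detection.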
60

Delineating ΔNp63α's function in epithelial cells

Sakaram, Suraj January 2016 (has links)
No description available.
