About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Numerical techniques for optimal investment consumption models

Mvondo, Bernardin Gael January 2014 (has links)
Magister Scientiae - MSc / The problem of optimal investment has been extensively studied by numerous researchers in order to generalize the original framework. Those generalizations have been made in different directions and using different techniques. For example, Perera [Optimal consumption, investment and insurance with insurable risk for an investor in a Levy market, Insurance: Mathematics and Economics, 46 (3) (2010) 479-484] applied the martingale approach to obtain a closed-form solution for the optimal investment, consumption and insurance strategies of an individual in the presence of an insurable risk, when the insurable risk and risky asset returns are described by Levy processes and the utility exhibits constant absolute risk aversion. In another work, Sattinger [The Markov consumption problem, Journal of Mathematical Economics, 47 (4-5) (2011) 409-416] gave a model of consumption behavior under uncertainty as the solution to a continuous-time dynamic control problem in which an individual moves between employment and unemployment according to a Markov process. In this thesis, we review the consumption models in the above framework and simulate some of them using an infinite series expansion method, a key focus of this research. Several numerical results obtained using MATLAB are presented with detailed explanations.
2

Comparison of Different Methods for Estimating Log-normal Means

Tang, Qi 01 May 2014 (has links)
The log-normal distribution is a popular model in many areas, especially in biostatistics and survival analysis, where the data tend to be right-skewed. In our research, a total of ten different estimators of log-normal means are compared theoretically. Simulations are run for different parameter values and sample sizes. The comparison shows that the "degree of freedom adjusted" maximum likelihood estimator and the Bayesian estimator under quadratic loss perform best when the mean square error (MSE) is used as the criterion. The ten estimators are then applied to a real dataset, an environmental study from the Naval Construction Battalion Center (NCBC) Superfund site in Rhode Island.
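As a hedged illustration of the kind of comparison this abstract describes (not code from the thesis), the sketch below scores two common estimators of a log-normal mean by Monte Carlo MSE: the plain sample mean and the plug-in ML estimator exp(mu_hat + sigma_hat^2/2). The parameter values are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n, reps = 1.0, 1.5, 25, 20_000
true_mean = np.exp(mu + sigma**2 / 2)        # E[X] for X ~ LogNormal(mu, sigma^2)

mse = {"sample mean": 0.0, "ML plug-in": 0.0}
for _ in range(reps):
    x = rng.lognormal(mean=mu, sigma=sigma, size=n)
    logs = np.log(x)
    est_naive = x.mean()                                  # arithmetic sample mean
    est_ml = np.exp(logs.mean() + logs.var(ddof=0) / 2)   # exp(mu_hat + sigma_hat^2 / 2)
    mse["sample mean"] += (est_naive - true_mean) ** 2 / reps
    mse["ML plug-in"] += (est_ml - true_mean) ** 2 / reps

print(mse)   # with heavy right skew and small n, the two criteria values differ noticeably
```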
3

Resource Allocation and Adaptive Antennas in Cellular Communications

Cardieri, Paulo 25 September 2000 (has links)
The rapid growth in demand for cellular mobile communications and emerging fixed wireless access has created the need to increase system capacity through more efficient utilization of the frequency spectrum, and the need for a better grade of service. In cellular systems, capacity improvement can be achieved by reducing co-channel interference. Several techniques have been proposed in the literature for mitigating co-channel interference, such as adaptive antennas and power control. Also, by allocating transmitter power and communication channels efficiently (resource allocation), overall co-channel interference can be kept below a maximum tolerable level while maximizing the carried traffic of the system. This dissertation investigates the performance of base station adaptive antennas, power control and channel allocation as techniques for capacity improvement. Several approaches are analyzed. First, we study the combined use of adaptive antennas and a fractional loading factor in order to estimate the potential capacity improvement achieved by adaptive antennas. Next, an extensive simulation analysis of a cellular network is carried out to investigate the complex interrelationship between power control, channel allocation and adaptive antennas. In the first part of this simulation analysis, the combined use of adaptive antennas, power control and reduced cluster size is analyzed in a cellular system using fixed channel allocation. In the second part, we analyze the benefits of combining adaptive antennas, dynamic channel allocation and power control. Two representative channel allocation algorithms are considered and analyzed with respect to how efficiently they transform reduced co-channel interference into higher carried traffic. Finally, the spatial filtering capability of adaptive antennas is used to allow several users to share the same channel within the same cell. Several allocation algorithms combined with power control are analyzed. / Ph. D.
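As a rough, hedged illustration of why reducing co-channel interference buys capacity (a textbook back-of-the-envelope estimate, not a result or method from the dissertation), the sketch below uses the classical hexagonal-reuse SIR formula and an idealized beamformer that leaves only a couple of first-tier interferers visible; all numbers are purely illustrative.

```python
import numpy as np

def sir_db(cluster_size, path_loss_exp=4.0, interferers=6):
    """Downlink SIR (dB) with first-tier co-channel cells at distance D = R*sqrt(3N)."""
    q = np.sqrt(3 * cluster_size)                 # co-channel reuse ratio D/R
    return 10 * np.log10(q**path_loss_exp / interferers)

for n in (3, 4, 7):
    omni = sir_db(n)                              # omnidirectional antenna, 6 interferers
    beam = sir_db(n, interferers=2)               # idealized beamforming leaves ~2 visible
    print(f"N={n}: {omni:5.1f} dB omni, {beam:5.1f} dB with beamforming")
```

The point of the toy numbers is only that suppressing interferers lets a smaller cluster size (and hence more channels per cell) reach the same SIR target.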
4

A Study of Designs in Clinical Trials and Schedules in Operating Rooms

Hung, Wan-Ping 20 January 2011 (has links)
The design of clinical trials is one of the important problems in medical statistics. Its main purpose is to determine the methodology and the sample size required for a testing study to examine the safety and efficacy of drugs. It is also part of the Food and Drug Administration approval process. In this thesis, we first study the comparison of the efficacy of drugs in clinical trials. We focus on the two-sample comparison of proportions to investigate testing strategies based on a two-stage design. The properties and advantages of the procedures from the proposed testing designs are demonstrated by numerical results, where comparison with the classical method is made under the same sample size. A real example discussed in Cardenal et al. (1999) is provided to explain how the methods may be used in practice. Some figures are also presented to illustrate the pattern changes of the power functions of these methods. In addition, the proposed procedure is compared with the Pocock (1977) and O'Brien and Fleming (1979) tests based on the standardized statistics. In the second part of this work, the operating room scheduling problem is considered, which is also important in medical studies. The national health insurance system has been in operation in Taiwan for more than ten years. The Bureau of National Health Insurance continues to improve the system and tries to establish a reasonable fee ratio for people in different income ranges. In accordance with these adjustments, hospitals must pay more attention to controlling their running costs. One of a hospital's major revenue sources is its surgery center operations. In order to maintain financial balance, effective operating room management is necessary. For this topic, this study focuses on model fitting of operating times and on operating room scheduling. Log-normal and mixture log-normal distributions are found to be statistically acceptable for describing these operating times. The procedure is illustrated through an analysis of thirteen operations performed in the gynecology department of a major teaching hospital in southern Taiwan. The best-fitting distributions are used to evaluate the performance of some operating combinations on the daily schedule observed in the real data. The fitted distributions are selected through certain information criteria and by bootstrapping the log-likelihood ratio test. Moreover, we classify the operations into three categories and three stages per operation. Based on this classification, a strategy for efficient scheduling is proposed. The benefits of rescheduling based on the proposed strategy are compared with the original schedule observed.
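A minimal sketch of the kind of model screening described for the operating times: since a log-normal fit of the times is exactly a normal fit of the log-times, comparing one- versus multi-component Gaussian mixtures on the log scale by AIC stands in for the log-normal versus mixture log-normal choice. The data below are simulated, not the hospital data, and scikit-learn is one possible tool, not the one used in the thesis.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Simulated operating times (minutes): two procedure subpopulations.
times = np.concatenate([rng.lognormal(3.6, 0.25, 120),
                        rng.lognormal(4.3, 0.20, 60)])
log_t = np.log(times)[:, None]

# k = 1 is a plain log-normal fit of the times; k > 1 is a mixture of log-normals.
for k in (1, 2, 3):
    gm = GaussianMixture(n_components=k, random_state=0).fit(log_t)
    print(f"{k}-component log-normal mixture: AIC = {gm.aic(log_t):.1f}")
```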
5

YSCAT Backscatter Distributions

Barrowes, Benjamin E. 14 May 2003 (has links) (PDF)
YSCAT is a unique ultrawideband microwave scatterometer developed to investigate the sea surface under a variety of environmental and radar parameters. The YSCAT94 experiment consisted of a six-month deployment on the WAVES research tower operated by the Canada Centre for Inland Waters (CCIW). Over 3500 hours of data were collected at 2, 3.05, 5.3, 10.02, and 14 GHz and at a variety of wind speeds, relative azimuth angles, and incidence angles. A low wind speed "rolloff" of the normalized radar cross section (σ°) in the YSCAT94 data is found and quantified. The rolloff wind speed, γ, is estimated through regression estimation using an Epanechnikov kernel. For the YSCAT94 data, the rolloff is most noticeable at mid-range incidence angles, with γ values ranging from 3 to 6 m/s. In order to characterize the YSCAT94 backscatter distributions, a second-order polynomial in log space is developed as a model for the probability density of the radar cross section, ρ(σ°). Following Gotwols and Thompson, ρ(σ°) is found to adhere to a log-normal distribution for horizontal polarization and a generalized log-normal distribution for vertical polarization. If ρ(α|σ°) is assumed to be Rayleigh distributed, the instantaneous amplitude distribution ρ(α) is found to be the integral of a Rayleigh/generalized log-normal distribution. A robust algorithm is developed to fit this probability density function to YSCAT94 backscatter distributions. The mean and variance of the generalized log-normal distribution are derived to facilitate this algorithm. Over 2700 distinct data cases, sorted according to five frequencies, horizontal and vertical polarizations, upwind and downwind directions, eight incidence angles, 1-10 m/s wind speeds, and 0.1-0.38 mean wave slope, are considered. Definite trends are recognizable in the fitted parameters a1, a2, and C of the Rayleigh/generalized log-normal distribution when sorted according to wind speed and mean wave slope. At mid-range incidence angles, the Rayleigh/generalized log-normal distribution is found to adequately characterize both the low and high amplitude portions of the YSCAT94 backscatter distributions. However, at higher incidence angles (50° and 60°) the more general Weibull/generalized log-normal distribution is found to better characterize the low amplitude portion of the backscatter distributions.
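A sketch of a Nadaraya-Watson smoother with an Epanechnikov kernel, the style of regression estimation the abstract uses to locate the low-wind-speed rolloff of σ°; the data here are synthetic stand-ins, not YSCAT94 measurements, and the rolloff location is an assumed 4 m/s.

```python
import numpy as np

def epanechnikov(u):
    """Epanechnikov kernel K(u) = 0.75 * (1 - u^2) on |u| <= 1, else 0."""
    return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u**2), 0.0)

def nw_smooth(x_grid, x, y, h):
    """Nadaraya-Watson estimate of E[y | x] on x_grid with bandwidth h."""
    w = epanechnikov((x_grid[:, None] - x[None, :]) / h)
    return (w * y).sum(axis=1) / np.maximum(w.sum(axis=1), 1e-12)

rng = np.random.default_rng(2)
wind = rng.uniform(1.0, 10.0, 800)                      # wind speed, m/s
# Synthetic sigma^0 (dB): flat below ~4 m/s, log-linear above, plus noise.
sigma0_db = -25.0 + 12.0 * np.log10(np.maximum(wind, 4.0)) + rng.normal(0.0, 1.5, wind.size)

grid = np.linspace(1.0, 10.0, 10)
for u, s in zip(grid, nw_smooth(grid, wind, sigma0_db, h=1.0)):
    print(f"{u:4.1f} m/s -> {s:6.2f} dB")               # the knee near 4 m/s marks the rolloff
```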
6

Relaxivita magnetických nanočástic oxidů železa obsahujících diamagnetické kationty / Relaxivity of magnetic iron oxide nanoparticles containing diamagnetic cations

Kubíčková, Lenka January 2017 (has links)
Magnetic nanoparticles have received extensive attention in biomedical research, e.g. as prospective contrast agents for T2-weighted magnetic resonance imaging. The ability of a contrast agent to enhance the relaxation rate of 1H in its vicinity is quantified by relaxivity. The main aim of this thesis is to evaluate, by means of nuclear magnetic resonance, the transversal relaxivity of ε-Fe2−xAlxO3 nanoparticles coated with amorphous silica or citrate, namely its dependence on external magnetic field, temperature and thickness of the silica coating. The aluminium content x = 0.23(1) was determined from XRF; the material was further characterised by XRPD, Mössbauer spectroscopy, DLS, TEM and magnetic measurements. The size of the magnetic cores was ∼21 nm, and the thickness of the silica coating ∼6, 10, 17 and 21 nm. Magnetization of the ε-Fe2−xAlxO3 nanoparticles increased by ∼30 % compared to ε-Fe2O3. The saturating dependence of relaxivity on external magnetic field and its linear decrease with increasing silica coating thickness contravene the theoretical model of the motional averaging regime (MAR); nevertheless, the temperature dependence acquired at 0.47 T and 11.75 T may be explained by MAR. In comparison to ε-Fe2O3 nanoparticles, the relaxivity of the examined samples was higher for par-...
7

Risk contribution and its application in asset and risk management for life insurance / Riskbidrag och dess användning i kapital- och riskförvaltning för livförsäkringsbolag

Sundin, Jesper January 2016 (has links)
In risk management one important aspect is the allocation of total portfolio risk to its components. This can be done by measuring each component's risk contribution relative to the total risk, taking into account the covariance between components. The measurement procedure is straightforward under the assumption of elliptical distributions but not under the commonly used multivariate log-normal distribution. Two portfolio strategies are considered: the "buy and hold" strategy and the "constant mix" strategy. The profits and losses of the components of a generic portfolio strategy are defined in order to enable a proper definition of risk contribution for the constant mix strategy. Kernel estimation of the risk contributions is then performed for both portfolio strategies using Monte Carlo simulation. Further, applications of risk contributions to asset and risk management are discussed in the context of life insurance. / En viktig aspekt inom riskhantering är tilldelning av total portföljrisk till tillgångsportföljens beståndsdelar. Detta kan åstadkommas genom att mäta riskbidrag, som även kan ta hänsyn till beroenden mellan risktillgångar. Beräkning av riskbidrag är enkel vid antagande om elliptiska fördelningar så som multivariat normalfördelning, men inte vid antagande om multivariat log-normalfördelning där analytiska formler saknas. Skillnaden mellan riskbidragen inom två portföljstrategier undersöks. Dessa strategier är "buy and hold" och "constant mix" (konstant ombalansering). Tilldelning av resultaten hos de olika beståndsdelarna med en generisk portföljstrategi härleds för att kunna definiera riskbidrag för "constant mix" portföljstrategin. "Kernel estimering" används för att estimera riskbidrag genom simulering. Vidare diskuteras applikationer för tillgångs- och riskhantering inom ramen för livförsäkringsbolag.
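A hedged sketch of the idea for the simpler buy-and-hold case (not the thesis code): Euler-style VaR contributions under multivariate log-normal asset values, estimated from Monte Carlo scenarios with a Gaussian kernel concentrated around the VaR level so that the contributions approximately sum to the total VaR. All figures and the bandwidth rule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n_sim, alpha = 200_000, 0.99
weights = np.array([0.5, 0.3, 0.2])                    # buy-and-hold portfolio weights
mu = np.array([0.02, 0.04, 0.03])                      # log-return means
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.06]])                   # log-return covariance

# Component losses when end-of-period asset values are multivariate log-normal.
z = rng.multivariate_normal(mu, cov, size=n_sim)
component_loss = weights * (1.0 - np.exp(z))           # per-component loss
total_loss = component_loss.sum(axis=1)

var_alpha = np.quantile(total_loss, alpha)             # portfolio VaR at level alpha

# Kernel-weighted estimate of E[L_i | L = VaR]: weights peak at scenarios near the VaR.
h = total_loss.std() * n_sim ** (-0.2)                 # rule-of-thumb bandwidth
k = np.exp(-0.5 * ((total_loss - var_alpha) / h) ** 2)
contrib = (k[:, None] * component_loss).sum(axis=0) / k.sum()

print("VaR:", round(var_alpha, 4))
print("contributions:", np.round(contrib, 4), "sum:", round(contrib.sum(), 4))
```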
8

雙變量脆弱性韋伯迴歸模式之研究 / A Study of Weibull Regression Models with Bivariate Frailty

余立德, Yu, Li-Ta Unknown Date (has links)
摘要 本文主要考慮群集樣本(clustered samples)的存活分析，而每一群集中又分為兩種組別(groups)。假定同群集同組別內的個體共享相同但不可觀測的隨機脆弱性(frailty)，因此面臨的是雙變量脆弱性變數的多變量存活資料。首先，驗證雙變量脆弱性對雙變量對數存活時間及雙變量存活時間之相關係數所造成的影響。接著，假定雙變量脆弱性服從雙變量對數常態分配，條件存活時間模式為韋伯迴歸模式，我們利用EM法則，推導出雙變量脆弱性之多變量存活模式中母數的估計方法。 關鍵詞：雙變量脆弱性，Weibull迴歸模式，對數常態分配，EM法則 / Abstract Consider survival analysis for clustered samples, where each cluster contains two groups. Assume that individuals within the same cluster and the same group share a common but unobservable random frailty. Hence, the focus of this work is on a bivariate frailty model for the analysis of multivariate survival data. First, we derive expressions for the correlation between the two survival times to show how the bivariate frailty affects these correlation coefficients. Then, a bivariate log-normal distribution is used to model the bivariate frailty. We modify the EM algorithm to estimate the parameters of the Weibull regression model with bivariate log-normal frailty. Key words: bivariate frailty, Weibull regression model, log-normal distribution, EM algorithm.
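A simulation sketch of the setting described above (not the thesis derivation or its EM estimator): Weibull survival times whose cumulative hazard is multiplied by a log-normal frailty shared within each cluster-group, so the frailty alone induces the within-group correlation of log survival times. Parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n_clusters, per_group = 2000, 2
lam, shape = 0.1, 1.5            # Weibull baseline: cumulative hazard H0(t) = lam * t**shape
frailty_sd = 0.7                 # sd of log-frailty (log-normal frailty)

pairs = []
for _ in range(n_clusters):
    for _group in range(2):      # two groups per cluster, each with its own shared frailty
        z = rng.lognormal(mean=0.0, sigma=frailty_sd)   # shared within this cluster-group
        e = rng.exponential(size=per_group)
        t = (e / (z * lam)) ** (1.0 / shape)            # invert H(t | z) = z * lam * t**shape
        pairs.append(np.log(t))

pairs = np.array(pairs)
# Positive correlation of paired log survival times comes entirely from the shared frailty.
print("within-group corr of log T:", round(np.corrcoef(pairs[:, 0], pairs[:, 1])[0, 1], 3))
```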
9

Single and Multiple Emitter Localization in Cognitive Radio Networks

Ureten, Suzan January 2017 (has links)
Cognitive radio (CR) is often described as a context-intelligent radio, capable of changing its transmit parameters dynamically based on interaction with the environment in which it operates. The work in this thesis explores the problem of using received signal strength (RSS) measurements taken by a network of CR nodes to generate an interference map of a given geographical area and to estimate the locations of multiple primary transmitters that operate simultaneously in the area. A probabilistic model of the problem is developed, and algorithms to address the location estimation challenges are proposed. Three approaches are proposed to solve the localization problem. The first estimates the locations from the generated interference map when no information about the propagation model or any of its parameters is available. The second approximates the maximum likelihood (ML) estimate of the transmitter locations with a grid search when the model is known and its parameters are available. The third also requires knowledge of the model parameters, but it generates samples from the joint posterior of the unknown location parameters with Markov chain Monte Carlo (MCMC) methods, as an alternative to the computationally demanding grid search. For the RF cartography generation problem, we study global and local interpolation techniques, specifically techniques based on Delaunay triangulation, since reusing an existing triangulation provides a computationally attractive solution. We present a comparative performance evaluation of these interpolation techniques in terms of RF field strength estimation and emitter localization. Even though the estimates obtained from the generated interference maps are less accurate than those of the ML estimator, these rough estimates are used to initialize a more accurate algorithm, such as the MCMC technique, to reduce its complexity. The complexity of ML estimators based on a full grid search is also addressed by various types of iterative grid search methods. One challenge in applying the ML estimation algorithm to the multiple-emitter localization problem is that it requires a pdf approximation for sums of log-normal random variables in the likelihood calculation at each grid location. This motivates our investigation of the sum-of-log-normals approximations studied in the literature in order to select the approximation appropriate to our model assumptions. As a final extension of this work, we propose our own approximation, based on fitting a distribution to a set of simulated data, and compare it with the well-known Fenton-Wilkinson approximation, a simple and computationally efficient method that fits a log-normal distribution to a sum of log-normals by matching the first and second central moments of the random variables. We demonstrate that the location estimation accuracy of the grid search technique obtained with our proposed approximation is higher than that obtained with the Fenton-Wilkinson approximation in many different scenarios.
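A sketch of the Fenton-Wilkinson idea referred to above, not the thesis implementation: fit a single log-normal to a sum of independent log-normals by matching the mean and variance of the sum, then compare those matched moments against Monte Carlo. The component parameters are arbitrary.

```python
import numpy as np

def fenton_wilkinson(mu, sigma):
    """Log-normal (mu_S, sigma_S) whose mean/variance match sum_i exp(N(mu_i, sigma_i^2))."""
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    mean_sum = np.sum(np.exp(mu + sigma**2 / 2))
    var_sum = np.sum((np.exp(sigma**2) - 1.0) * np.exp(2 * mu + sigma**2))
    sigma_s2 = np.log(1.0 + var_sum / mean_sum**2)
    return np.log(mean_sum) - sigma_s2 / 2, np.sqrt(sigma_s2)

mu_i, sigma_i = np.array([-1.0, -0.5, 0.0]), np.array([0.8, 0.8, 0.8])
mu_s, sigma_s = fenton_wilkinson(mu_i, sigma_i)

rng = np.random.default_rng(5)
samples = np.exp(rng.normal(mu_i, sigma_i, size=(100_000, 3))).sum(axis=1)
print("matched mean:", np.exp(mu_s + sigma_s**2 / 2), "MC mean:", samples.mean())
print("matched std: ", np.exp(mu_s + sigma_s**2 / 2) * np.sqrt(np.exp(sigma_s**2) - 1.0),
      "MC std:", samples.std())
```

The moments agree by construction; the known weakness the thesis exploits is that the tails of the fitted log-normal can still deviate from the true distribution of the sum.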
10

[en] A POISSON-LOGNORMAL MODEL TO FORECAST THE IBNR QUANTITY VIA MICRO-DATA / [pt] UM MODELO POISSON-LOGNORMAL PARA PREVISÃO DA QUANTIDADE IBNR VIA MICRO-DADOS

JULIANA FERNANDES DA COSTA MACEDO 02 February 2016 (has links)
[pt] O principal objetivo desta dissertação é realizar a previsão da reserva IBNR. Para isto foi desenvolvido um modelo estatístico de distribuições combinadas que busca uma adequada representação dos dados. A reserva IBNR, sigla em inglês para Incurred But Not Reported, representa o montante que as seguradoras precisam ter para pagamentos de sinistros atrasados, que já ocorreram no passado, mas ainda não foram avisados à seguradora até a data presente. Dada a importância desta reserva, diversos métodos para estimação da reserva IBNR já foram propostos. Um dos métodos mais utilizado pelas seguradoras é o Método Chain Ladder, que se baseia em triângulos run-off, que é o agrupamento dos dados conforme data de ocorrência e aviso de sinistro. No entanto o agrupamento dos dados faz com que informações importantes sejam perdidas. Esta dissertação baseada em outros artigos e trabalhos que consideram o não agrupamento dos dados, propõe uma nova modelagem para os dados não agrupados. O modelo proposto combina a distribuição do atraso no aviso da ocorrência, representada aqui pela distribuição log-normal truncada (pois só há informação até a última data observada); a distribuição da quantidade total de sinistros ocorridos num dado período, modelada pela distribuição Poisson; e a distribuição do número de sinistros ocorridos em um dado período e avisados até a última data observada, que será caracterizada por uma distribuição Binomial. Por fim, a quantidade de sinistros IBNR foi estimada por método e pelo Chain Ladder e avaliou-se a capacidade de previsão de ambos. Apesar da distribuição de atrasos do modelo proposto se adequar bem aos dados, o modelo proposto obteve resultados inferiores ao Chain Ladder em termos de previsão. / [en] The main objective of this dissertation is to predict the IBNR reserve. For this, a statistical model of combined distributions was developed, seeking a distribution that fits the data well. The IBNR reserve, short for Incurred But Not Reported, represents the amount that insurers need to hold to pay claims that occurred in the past but have not yet been reported by the present date. Given the importance of this reserve, several methods for estimating it have been proposed. One of the methods most used by insurers is the Chain Ladder, which is based on run-off triangles, a format that groups the data according to occurrence and reporting dates. However, this grouping causes the loss of important information. This dissertation, based on other articles and works that consider ungrouped data, proposes a new model for the non-aggregated data. The proposed model combines the distribution of the reporting delay, represented by a truncated log-normal distribution (since information is available only up to the last observed date); the total number of claims incurred in a given period, modeled by a Poisson distribution; and the number of claims that occurred in a given period and were reported by the last observed date, characterized by a binomial distribution. Finally, the IBNR claim count was estimated with this model and with the Chain Ladder, and the predictive capacity of both methods was evaluated. Although the delay distribution fits the data well, the proposed model produced poorer forecasts than the Chain Ladder.
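As a hedged point of reference for the Chain Ladder baseline mentioned above (the figures are a toy cumulative run-off triangle, not the dissertation's data), the sketch below computes volume-weighted development factors, completes the triangle, and reads off the reserve per occurrence period.

```python
import numpy as np

# Toy cumulative run-off triangle: rows = occurrence periods, cols = development periods.
tri = np.array([
    [100., 160., 190., 200.],
    [110., 175., 210., np.nan],
    [120., 190., np.nan, np.nan],
    [130., np.nan, np.nan, np.nan],
])
n = tri.shape[1]

# Volume-weighted development factors f_j from the observed part of the triangle.
factors = []
for j in range(n - 1):
    known = ~np.isnan(tri[:, j + 1])
    factors.append(tri[known, j + 1].sum() / tri[known, j].sum())

# Complete the triangle column by column.
full = tri.copy()
for j in range(n - 1):
    missing = np.isnan(full[:, j + 1])
    full[missing, j + 1] = full[missing, j] * factors[j]

latest = np.array([row[~np.isnan(row)][-1] for row in tri])   # latest observed diagonal
reserve = full[:, -1] - latest                                # projected ultimate minus observed
print("development factors:", np.round(factors, 3))
print("reserve by occurrence period:", np.round(reserve, 1))
```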
