21

YSCAT Backscatter Distributions

Barrowes, Benjamin E. 14 May 2003 (has links) (PDF)
YSCAT is a unique ultrawideband microwave scatterometer developed to investigate the sea surface under a variety of environmental and radar parameters. The YSCAT94 experiment consisted of a six-month deployment on the WAVES research tower operated by the Canada Centre for Inland Waters (CCIW). Over 3500 hours of data were collected at 2, 3.05, 5.3, 10.02, and 14 GHz and at a variety of wind speeds, relative azimuth angles, and incidence angles. A low wind speed "rolloff" of the normalized radar cross section (σ°) in the YSCAT94 data is found and quantified. The rolloff wind speed, γ, is estimated through kernel regression estimation using an Epanechnikov kernel. For YSCAT94 data, the rolloff is most noticeable at mid-range incidence angles, with γ values ranging from 3 to 6 m/s. In order to characterize YSCAT94 backscatter distributions, a second-order polynomial in log space is developed as a model for the probability density of the radar cross section, ρ(σ°). Following Gotwols and Thompson, ρ(σ°) is found to adhere to a log-normal distribution for horizontal polarization and a generalized log-normal distribution for vertical polarization. If ρ(α|σ°) is assumed to be Rayleigh distributed, the instantaneous amplitude distribution ρ(α) is found to be the integral of a Rayleigh/generalized log-normal distribution. A robust algorithm is developed to fit this probability density function to YSCAT94 backscatter distributions. The mean and variance of the generalized log-normal distribution are derived to facilitate this algorithm. Over 2700 distinct data cases sorted according to five different frequencies, horizontal and vertical polarizations, upwind and downwind directions, eight different incidence angles, 1-10 m/s wind speeds, and 0.1-0.38 mean wave slope are considered. Definite trends are recognizable in the fitted parameters a1, a2, and C of the Rayleigh/generalized log-normal distribution when sorted according to wind speed and mean wave slope. At mid-range incidence angles, the Rayleigh/generalized log-normal distribution is found to adequately characterize both the low and high amplitude portions of the YSCAT94 backscatter distributions. However, at higher incidence angles (50° and 60°), the more general Weibull/generalized log-normal distribution is found to better characterize the low amplitude portion of the backscatter distributions.
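The abstract does not reproduce the fitting code, but the kernel regression step it names is standard. Below is a minimal sketch of Nadaraya-Watson regression with an Epanechnikov kernel, of the kind that could smooth σ° against wind speed before reading off the rolloff knee; the data, bandwidth, and variable names are hypothetical, not taken from the thesis.

```python
import numpy as np

def epanechnikov(u):
    """Epanechnikov kernel: K(u) = 0.75 * (1 - u^2) for |u| <= 1, else 0."""
    return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u**2), 0.0)

def nw_regression(x_grid, x, y, bandwidth):
    """Nadaraya-Watson kernel regression of y on x, evaluated at x_grid."""
    w = epanechnikov((x_grid[:, None] - x[None, :]) / bandwidth)
    return (w * y).sum(axis=1) / np.maximum(w.sum(axis=1), 1e-12)

# Hypothetical sigma0 (dB) versus wind speed (m/s) with a low-wind rolloff.
rng = np.random.default_rng(0)
wind = rng.uniform(1.0, 15.0, 500)
sigma0_db = 10.0 * np.log10(np.maximum(wind - 3.0, 0.1)) + rng.normal(0, 1.0, 500)
grid = np.linspace(1.0, 15.0, 100)
smooth = nw_regression(grid, wind, sigma0_db, bandwidth=1.5)
# The rolloff speed gamma can then be located as the knee of the smoothed curve.
```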
22

Relaxivita magnetických nanočástic oxidů železa obsahujících diamagnetické kationty / Relaxivity of magnetic iron oxide nanoparticles containing diamagnetic cations

Kubíčková, Lenka January 2017 (has links)
Magnetic nanoparticles have received extensive attention in biomedical research, e.g. as prospective contrast agents for T2-weighted magnetic resonance imaging. The ability of a contrast agent to enhance the relaxation rate of ¹H in its vicinity is quantified by relaxivity. The main aim of this thesis is to evaluate the transversal relaxivity of ε-Fe2−xAlxO3 nanoparticles coated with amorphous silica or citrate - its dependence on external magnetic field, temperature, and thickness of the silica coating - by means of nuclear magnetic resonance. The aluminium content x = 0.23(1) was determined from XRF, and the material was further characterised by XRPD, Mössbauer spectroscopy, DLS, TEM, and magnetic measurements. The size of the magnetic cores was ∼21 nm, and the thickness of the silica coating ∼6, 10, 17, and 21 nm. Magnetization of the ε-Fe2−xAlxO3 nanoparticles increased by ∼30% when compared to ε-Fe2O3. The saturating dependence of relaxivity on external magnetic field and its linear decrease with increasing thickness of the silica coating contravene the theoretical model of the motional averaging regime (MAR); nevertheless, the temperature dependence acquired at 0.47 T and 11.75 T may be explained by MAR. In comparison to ε-Fe2O3 nanoparticles, the relaxivity of the examined samples was higher for par-...
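As a worked illustration of how relaxivity is typically extracted (not the thesis's own code or data): the transverse relaxivity r2 is the slope of the measured relaxation rate R2 = 1/T2 against contrast agent concentration. All numbers below are invented for illustration.

```python
import numpy as np

# Hypothetical concentration series (mM of Fe) and measured transverse
# relaxation rates R2 = 1/T2 (s^-1); values are illustrative only.
conc = np.array([0.0, 0.1, 0.2, 0.4, 0.8])      # mM
R2   = np.array([0.9, 9.2, 17.8, 35.1, 70.3])   # s^-1

# Transverse relaxivity r2 is the slope of R2 versus concentration:
# R2(c) = R2_dia + r2 * c
r2, R2_dia = np.polyfit(conc, R2, 1)
print(f"r2 = {r2:.1f} s^-1 mM^-1, diamagnetic offset = {R2_dia:.2f} s^-1")
```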
23

Risk contribution and its application in asset and risk management for life insurance / Riskbidrag och dess användning i kapital- och riskförvaltning för livförsäkringsbolag

Sundin, Jesper January 2016 (has links)
In risk management, one important aspect is the allocation of total portfolio risk to its components. This can be done by measuring each component's risk contribution relative to the total risk, taking into account the covariance between components. The measurement procedure is straightforward under assumptions of elliptical distributions, but not under the commonly used multivariate log-normal distribution. Two portfolio strategies are considered: the "buy and hold" and the "constant mix" strategy. The profits and losses of the components of a generic portfolio strategy are defined in order to enable a proper definition of risk contribution for the constant mix strategy. Kernel estimation of risk contributions is then performed for both portfolio strategies using Monte Carlo simulation. Further, applications of risk contributions to asset and risk management are discussed in the context of life insurance. / En viktig aspekt inom riskhantering är tilldelning av total portföljrisk till tillgångsportföljens beståndsdelar. Detta kan åstadkommas genom att mäta riskbidrag, som även kan ta hänsyn till beroenden mellan risktillgångar. Beräkning av riskbidrag är enkel vid antagande om elliptiska fördelningar så som multivariat normalfördelning, men inte vid antagande om multivariat log-normalfördelning där analytiska formler saknas. Skillnaden mellan riskbidragen inom två portföljstrategier undersöks. Dessa strategier är "buy and hold" och "constant mix" (konstant ombalansering). Tilldelningen av resultaten hos de olika beståndsdelarna med en generisk portföljstrategi härleds för att kunna definiera riskbidrag för "constant mix"-portföljstrategin. Kernel-estimering används för att estimera riskbidrag genom simulering. Vidare diskuteras applikationer för tillgångs- och riskhantering inom ramen för livförsäkringsbolag.
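A sketch of the kernel-based Monte Carlo estimator this abstract describes, under the standard Euler allocation where component i contributes RC_i = E[L_i | L = VaR_α(L)]; the portfolio, kernel choice, and bandwidth rule are assumptions for illustration, not taken from the thesis.

```python
import numpy as np

def risk_contributions(X, alpha=0.99, bandwidth=None):
    """Kernel (Nadaraya-Watson) estimate of Euler risk contributions
    RC_i = E[L_i | L = VaR_alpha(L)], from simulated component losses X
    of shape (n_scenarios, n_components)."""
    L = X.sum(axis=1)                              # total loss per scenario
    var = np.quantile(L, alpha)                    # Monte Carlo VaR estimate
    if bandwidth is None:
        bandwidth = L.std() * len(L) ** (-0.2)     # simple rule-of-thumb
    u = (L - var) / bandwidth
    w = np.where(np.abs(u) <= 1, 0.75 * (1 - u**2), 0.0)  # Epanechnikov weights
    return (w[:, None] * X).sum(axis=0) / w.sum()

# Hypothetical multivariate log-normal losses for a three-asset portfolio.
rng = np.random.default_rng(1)
Z = rng.multivariate_normal([0, 0, 0],
                            [[1, .5, .2], [.5, 1, .3], [.2, .3, 1]], 100_000)
X = np.exp(0.1 + 0.2 * Z)
rc = risk_contributions(X)
print(rc, rc.sum())   # contributions sum (approximately) to the portfolio VaR
```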
24

雙變量脆弱性韋伯迴歸模式之研究 / A study of Weibull regression models with bivariate frailty

余立德, Yu, Li-Ta Unknown Date (has links)
摘要 本文主要考慮群集樣本(clustered samples)的存活分析，而每一群集中又分為兩種組別(groups)。假定同群集同組別內的個體共享相同但不可觀測的隨機脆弱性(frailty)，因此面臨的是雙變量脆弱性變數的多變量存活資料。首先，驗證雙變量脆弱性對雙變量對數存活時間及雙變量存活時間之相關係數所造成的影響。接著，假定雙變量脆弱性服從雙變量對數常態分配，條件存活時間模式為韋伯迴歸模式，我們利用EM法則，推導出雙變量脆弱性之多變量存活模式中母數的估計方法。 關鍵詞:雙變量脆弱性,Weibull迴歸模式,對數常態分配,EM法則 / Abstract Consider survival analysis for clustered samples, where each cluster contains two groups. Assume that individuals within the same cluster and the same group share a common but unobservable random frailty. Hence, the focus of this work is on a bivariate frailty model for the analysis of multivariate survival data. First, we derive expressions for the correlations between the two log survival times and between the two survival times, to show how the bivariate frailty affects these correlation coefficients. Then, a bivariate log-normal distribution is used to model the bivariate frailty, with a Weibull regression model for the conditional survival times. We modify the EM algorithm to estimate the parameters of the Weibull regression model with bivariate log-normal frailty. Key words: bivariate frailty, Weibull regression model, log-normal distribution, EM algorithm.
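A minimal simulation sketch of a shared-frailty Weibull model of the kind studied here; for simplicity this version shares a single log-normal frailty across both groups in a cluster, whereas the thesis assigns each group its own component of a bivariate log-normal frailty. All parameter values are hypothetical.

```python
import numpy as np

def simulate_frailty_weibull(n_clusters, shape, base_scale, beta, frailty_sd, rng):
    """Simulate clustered survival times from a Weibull regression model with a
    shared log-normal frailty: conditional hazard h(t|w) = w * h0(t) * exp(x*beta).
    Each cluster has two groups (x = 0 and x = 1)."""
    times = []
    for _ in range(n_clusters):
        w = np.exp(rng.normal(0.0, frailty_sd))   # log-normal frailty draw
        for x in (0, 1):
            rate = w * np.exp(beta * x) / base_scale**shape
            u = rng.uniform()
            # Invert the conditional survival function S(t|w) = exp(-rate * t^shape)
            times.append(((-np.log(u)) / rate) ** (1.0 / shape))
    return np.array(times).reshape(n_clusters, 2)   # one row per cluster

rng = np.random.default_rng(2)
t = simulate_frailty_weibull(1000, shape=1.5, base_scale=10.0, beta=0.5,
                             frailty_sd=0.8, rng=rng)
print(np.corrcoef(np.log(t[:, 0]), np.log(t[:, 1]))[0, 1])  # frailty-induced correlation
```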
25

Variational Bayesian Learning and its Applications

Zhao, Hui January 2013 (has links)
This dissertation is devoted to studying a fast and analytic approximation method, called the variational Bayesian (VB) method, and aims to give insight into its general applicability and usefulness, and to explore its applications to various real-world problems. This work has three main foci: 1) general applicability and properties; 2) diagnostics for VB approximations; 3) variational applications. Generally, variational inference has been developed in the context of the exponential family, which is open to further development. First, it usually considers cases in the context of the conjugate exponential family. Second, variational inferences are developed only with respect to natural parameters, which are often not the parameters of immediate interest. Moreover, the full factorization, which assumes all terms to be independent of one another, is the most commonly used scheme in most variational applications. We show that VB inferences can be extended to a more general situation. We propose a special parameterization for a parametric family, and also propose a factorization scheme with a more general dependency structure than is traditional in VB. Based on these new frameworks, we develop a variational formalism in which VB has a fast implementation and is not limited to the conjugate exponential setting. We also investigate its local convergence property, the effects of choosing different priors, and the effects of choosing different factorization schemes. The essence of the VB method relies on making simplifying assumptions about the posterior dependence of a problem. By definition, the general posterior dependence structure is distorted. In addition, in various applications, we observe that the posterior variances are often underestimated. We aim to develop diagnostic tests to assess VB approximations; these methods are expected to be quick and easy to use, and to require no sophisticated tuning expertise. We propose three methods to compute the actual posterior covariance matrix using only the knowledge obtained from VB approximations: 1) examine the joint posterior distribution and attempt to find an optimal affine transformation that links the VB and true posteriors; 2) based on a marginal posterior density approximation, work in specific low dimensional directions to estimate true posterior variances and correlations; 3) based on a stepwise conditional approach, construct and solve a set of systems of equations that lead to estimates of the true posterior variances and correlations. A key computation in the above methods is to calculate a univariate marginal or conditional variance. We propose a novel way, called the VB Adjusted Independent Metropolis-Hastings (VBAIMH) method, to compute these quantities. It uses an independent Metropolis-Hastings (IMH) algorithm with proposal distributions configured by VB approximations. The variance of the target distribution is obtained by monitoring the acceptance rate of the generated chain. One major question associated with the VB method is how well the approximations can work. We particularly study the mean structure approximations, and show how it is possible to use VB approximations to approach model selection tasks such as determining the dimensionality of a model or selecting variables. We also consider variational applications in Bayesian nonparametric modeling, especially for the Dirichlet process (DP).
The posterior inference for the DP has been extensively studied in the context of MCMC methods. This work presents a full variational solution for the DP with non-conjugate settings. Our solution uses a truncated stick-breaking representation. We propose an empirical method to determine the number of distinct components in a finite dimensional DP. The posterior predictive distribution for the DP is often not available in closed form. We show how to use variational techniques to approximate this quantity. As a concrete application study, we work through the VB method on regime-switching lognormal models and present solutions to quantify both the uncertainty in the parameters and the model specification. Through a series of numerical comparison studies with likelihood-based methods and MCMC methods on simulated and real data sets, we show that the VB method can recover the model structure exactly, gives reasonable point estimates, and is very computationally efficient.
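A one-dimensional sketch of the IMH ingredient in the VBAIMH idea described above: sample the target with an independence Metropolis-Hastings chain whose proposal plays the role of a VB marginal, and monitor the acceptance rate as a signal of proposal quality. The target, the deliberately underdispersed "VB" proposal, and all numbers are hypothetical.

```python
import numpy as np

def imh(log_target, mu_q, sd_q, n_iter, rng):
    """Independence Metropolis-Hastings with a Gaussian proposal (standing in
    for a VB marginal approximation). Returns the chain and acceptance rate."""
    log_q = lambda z: -0.5 * ((z - mu_q) / sd_q) ** 2   # unnormalized proposal log-density
    x = mu_q
    chain, accepted = [], 0
    for _ in range(n_iter):
        y = rng.normal(mu_q, sd_q)
        # IMH acceptance ratio: pi(y) q(x) / (pi(x) q(y))
        if np.log(rng.uniform()) < log_target(y) + log_q(x) - log_target(x) - log_q(y):
            x, accepted = y, accepted + 1
        chain.append(x)
    return np.array(chain), accepted / n_iter

# Toy target N(0, 2^2); a narrow "VB-like" proposal with sd 1.0.
rng = np.random.default_rng(3)
chain, acc = imh(lambda z: -0.5 * (z / 2.0) ** 2, 0.0, 1.0, 50_000, rng)
print(f"acceptance rate = {acc:.2f}, chain sd = {chain.std():.2f}")  # sd approaches 2.0
```

The chain still targets the true distribution even though the proposal is underdispersed, so its empirical variance recovers the target variance; a low acceptance rate flags the mismatch between the VB proposal and the posterior.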
26

Single and Multiple Emitter Localization in Cognitive Radio Networks

Ureten, Suzan January 2017 (has links)
Cognitive radio (CR) is often described as a context-intelligent radio, capable of changing its transmit parameters dynamically based on interaction with the environment in which it operates. The work in this thesis explores the problem of using received signal strength (RSS) measurements taken by a network of CR nodes to generate an interference map of a given geographical area and estimate the locations of multiple primary transmitters that operate simultaneously in the area. A probabilistic model of the problem is developed, and algorithms to address location estimation challenges are proposed. Three approaches are proposed to solve the localization problem. The first approach is based on estimating the locations from the generated interference map when no information about the propagation model or any of its parameters is present. The second approach is based on approximating the maximum likelihood (ML) estimate of the transmitter locations with the grid search method when the model is known and its parameters are available. The third approach also requires knowledge of the model parameters, but is based on generating samples from the joint posterior of the unknown location parameters with Markov chain Monte Carlo (MCMC) methods, as an alternative to the highly computationally complex grid search approach. For the RF cartography generation problem, we study global and local interpolation techniques, specifically Delaunay triangulation based techniques, as the use of an existing triangulation provides a computationally attractive solution. We present a comparative performance evaluation of these interpolation techniques in terms of RF field strength estimation and emitter localization. Even though the estimates obtained from the generated interference maps are less accurate than the ML estimator, these rough estimates can be used to initialize a more accurate algorithm such as the MCMC technique, reducing its complexity. The complexity issues of ML estimators based on a full grid search are also addressed by various types of iterative grid search methods. One challenge in applying the ML estimation algorithm to the multiple-emitter localization problem is that it requires a pdf approximation for sums of log-normal random variables in the likelihood calculations at each grid location. This motivates our investigation of the sum of log-normal approximations studied in the literature, in order to select the approximation appropriate to our model assumptions. As a final extension of this work, we propose our own approximation based on distribution fitting to a set of simulated data and compare our approach with the well-known Fenton-Wilkinson approximation, a simple and computationally efficient approach that fits a log-normal distribution to a sum of log-normals by matching the first and second central moments of the random variables. We demonstrate that the location estimation accuracy of the grid search technique obtained with our proposed approximation is higher than that obtained with Fenton-Wilkinson's in many different scenarios.
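The Fenton-Wilkinson approximation mentioned above is easy to state concretely: match the mean and variance of a sum of independent log-normals and read off the parameters of a single fitted log-normal. A sketch, with hypothetical parameters and a Monte Carlo sanity check:

```python
import numpy as np

def fenton_wilkinson(mus, sigmas):
    """Fenton-Wilkinson: fit one log-normal to the sum of independent
    LN(mu_i, sigma_i^2) variables by matching the sum's first two moments."""
    mus, sigmas = np.asarray(mus), np.asarray(sigmas)
    m1 = np.sum(np.exp(mus + 0.5 * sigmas**2))                        # E[S]
    var = np.sum(np.exp(2 * mus + sigmas**2) * (np.exp(sigmas**2) - 1))  # Var[S]
    sigma2 = np.log(1 + var / m1**2)
    mu = np.log(m1) - 0.5 * sigma2
    return mu, np.sqrt(sigma2)

# Hypothetical check against Monte Carlo.
rng = np.random.default_rng(4)
mus, sigmas = [0.0, 0.5, -0.3], [0.6, 0.4, 0.8]
S = sum(rng.lognormal(m, s, 200_000) for m, s in zip(mus, sigmas))
mu, sigma = fenton_wilkinson(mus, sigmas)
print(S.mean(), np.exp(mu + 0.5 * sigma**2))   # the two means should agree closely
```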
27

Modelling Low Dimensional Neural Activity / Modellering av lågdimensionell neural aktivitet

Wärnberg, Emil January 2016 (has links)
A number of recent studies have shown that the dimensionality of the neural activity in the cortex is low. However, what network structures are capable of producing such activity is not theoretically well understood. In this thesis, I discuss a few possible solutions to this problem, and demonstrate that a network with a multidimensional attractor can give rise to such low dimensional activity. The network is created using the Neural Engineering Framework, and exhibits several biologically plausible features, including a log-normal distribution of the synaptic weights. / Ett antal nyligen publicerade studier har visat att dimensionaliteten för neural aktivitet är låg. Dock är det inte klarlagt vilka nätverksstrukturer som kan ge upphov till denna typ av aktivitet. I denna uppsats diskuterar jag möjliga lösningsförslag, och demonstrerar att ett nätverk med en flerdimensionell attraktor ger upphov till lågdimensionell aktivitet. Nätverket skapas med hjälp av the Neural Engineering Framework, och uppvisar ett flertal biologiskt trovärdiga egenskaper. I synnerhet är synapsvikterna log-normalt fördelade.
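One common way to quantify "low dimensional activity" is the participation ratio of the activity covariance spectrum. The sketch below is not from the thesis; the rank-3 latent structure and all parameters are assumptions. It shows firing rates generated through log-normally distributed (positive) weights whose effective dimensionality stays near the number of latent factors.

```python
import numpy as np

def participation_ratio(activity):
    """Effective dimensionality of activity (time x neurons):
    PR = (sum_i lambda_i)^2 / sum_i lambda_i^2 over covariance eigenvalues."""
    lam = np.linalg.eigvalsh(np.cov(activity.T))
    return lam.sum() ** 2 / (lam ** 2).sum()

# Hypothetical: rates driven through a rank-3 mixing matrix with log-normal weights.
rng = np.random.default_rng(5)
latent = rng.normal(size=(2000, 3))                     # 3 latent factors over time
W = rng.lognormal(mean=-1.0, sigma=1.0, size=(3, 200))  # log-normal weights
rates = latent @ W + 0.05 * rng.normal(size=(2000, 200))
print(participation_ratio(rates))   # close to 3, far below the 200 neurons
```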
28

Unsupervised Change Detection Using Multi-Temporal SAR Data : A Case Study of Arctic Sea Ice / Oövervakad förändringsdetektion med multitemporell SAR data : En fallstudie över arktisk havsis

Fröjse, Linda January 2014 (has links)
The extent of Arctic sea ice has decreased over the years, and the importance of sea ice monitoring is expected to increase. Remote sensing change detection compares images acquired over the same geographic area at different times in order to identify changes that might have occurred in the area of interest. Change detection methods have been developed for cryospheric topics. The Kittler-Illingworth thresholding algorithm has proven to be an effective change detection tool, but has not been used for sea ice. Here it is applied to Arctic sea ice data. The objective is to investigate the unsupervised detection of changes in Arctic sea ice using multi-temporal SAR images. The well-known Kittler-Illingworth algorithm is tested using two density function models, i.e., the generalized Gaussian and the log-normal model. The difference image is obtained using the modified ratio operator. The histogram of the change image, which approximates its probability distribution, is considered to be a combination of two classes, i.e., the changed and unchanged classes. Histogram fitting techniques are used to estimate the unknown density functions and the prior probabilities. The optimum threshold is selected using a criterion function directly related to the classification error. In this thesis, three datasets were used covering parts of the Beaufort Sea from the years 1992, 2002, 2007 and 2009. The SAR and ASAR C-band data came from the ERS and ENVISAT satellites, respectively. All three were interpreted visually. For all three datasets, the generalized Gaussian detected a lot of change, whereas the log-normal detected less. Only one small subset of a dataset was validated against reference data. The log-normal distribution then obtained a 0% false alarm rate through all trials. The generalized Gaussian obtained false alarm rates around 4% for most of the trials. The generalized Gaussian achieved detection accuracies around 95%, whereas the log-normal achieved detection accuracies around 70%. The overall accuracies for the generalized Gaussian were about 95% in most trials. The log-normal achieved overall accuracies around 85%. The KHAT for the generalized Gaussian was in the range 0.66-0.93. The KHAT for the log-normal was in the range 0.68-0.77. Using one additional speckle filter iteration increased the accuracy for the log-normal distribution. Generally, the detection of positive change was accomplished with a higher level of accuracy than negative change detection. A visual inspection shows that the generalized Gaussian distribution probably over-estimates the change. The log-normal distribution consistently detects less change than the generalized Gaussian. Lack of validation data made validation of the results difficult. The performed validation might not be reliable, since the available validation data was only SAR imagery and differentiating change and no-change is difficult in the area. Further, due to the lack of reference data, it could not be decided with certainty which distribution performed best. / Ytan av arktisk havsis har minskat genom åren och vikten av havsisövervakning förväntas öka. Förändringsdetektion jämför bilder från samma geografiska område från olika tidpunkter för att identifiera förändringar som kan ha skett i intresseområdet. Förändringsdetekteringsmetoder har utvecklats för kryosfäriska ämnen. Tröskelvärdesbestämning med Kittler-Illingworth-algoritmen har visat sig vara ett effektivt verktyg för förändringsdetektion, men har inte använts på havsis.
Här appliceras algoritmen på arktisk havsis. Målet är att undersöka oövervakad förändringsdetektion i arktisk havsis med multitemporella SAR-bilder. Den välkända Kittler-Illingworth-algoritmen testas med två täthetsfunktioner, nämligen generaliserad normaldistribution och lognormal-distributionen. Differensbilden erhålls genom den modifierade ratio-operatorn. Histogrammet från förändringsbilden skattar dess täthetsfunktion, vilken anses vara en kombination av två klasser, förändrings- och ickeförändringsklasser. Histogrampassningstekniker används för att uppskatta de okända täthetsfunktionerna och a priori-sannolikheterna. Det optimala tröskelvärdet väljs genom en kriteriefunktion som är direkt relaterad till klassifikationsfel. I detta examensarbete användes tre dataset som täcker delar av Beaufort-havet från åren 1992, 2002, 2007 och 2009. SAR C-band data kom från satelliten ERS och ASAR C-band data kom från satelliten ENVISAT. Alla tre tolkades visuellt och för alla tre detekterade generaliserad normaldistribution mycket mer förändring än lognormal-distributionen. Bara en mindre del av ett dataset validerades mot referensdata. Lognormal-distributionen erhöll då 0% falska alarm i alla försök. Generaliserade normaldistributionen erhöll runt 4% falska alarm i de flesta försöken. Generaliserad normaldistribution nådde detekteringsnoggrannhet runt 95% medan lognormal-distributionen nådde runt 70%. Den generella noggrannheten för generaliserad normaldistribution var runt 95% i de flesta försöken. För lognormal-distributionen nåddes en generell noggrannhet runt 85%. KHAT-koefficienten för generaliserad normaldistribution var i intervallet 0.66-0.93. För lognormal-distributionen var den i intervallet 0.68-0.77. Med en extra speckle-filtrering ökades noggrannheten för lognormal-distributionen. Generellt sett detekterades positiv förändring med högre noggrannhet än negativ förändring. Visuell inspektion visar att generaliserad normaldistribution troligen överskattar förändringen. Lognormal-distributionen detekterar konsistent mindre förändring än generaliserad normaldistribution. Bristen på referensdata gjorde valideringen av resultaten svår. Den utförda valideringen är kanske inte trovärdig, eftersom den tillgängliga referensdatan bara var SAR-bilder och att särskilja förändring och ickeförändring är svårt i området. Vidare, på grund av bristen på referensdata, kunde det inte bestämmas med säkerhet vilken distribution som var bäst.
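For reference, the classic Kittler-Illingworth minimum-error threshold is straightforward to sketch under Gaussian class models; the thesis replaces the Gaussians with generalized Gaussian and log-normal densities, but the criterion-minimization structure is the same. The histogram data below is synthetic.

```python
import numpy as np

def kittler_illingworth(hist, bin_centers):
    """Classic Kittler-Illingworth minimum-error threshold assuming two
    Gaussian classes fitted to the histogram on either side of candidate t."""
    p = hist / hist.sum()
    best_J, best_t = np.inf, None
    for t in range(1, len(p) - 1):
        P1, P2 = p[:t].sum(), p[t:].sum()
        if P1 < 1e-9 or P2 < 1e-9:
            continue
        mu1 = (p[:t] * bin_centers[:t]).sum() / P1
        mu2 = (p[t:] * bin_centers[t:]).sum() / P2
        s1 = np.sqrt((p[:t] * (bin_centers[:t] - mu1) ** 2).sum() / P1)
        s2 = np.sqrt((p[t:] * (bin_centers[t:] - mu2) ** 2).sum() / P2)
        if s1 < 1e-9 or s2 < 1e-9:
            continue
        # Criterion J(t) = 1 + 2[P1 ln s1 + P2 ln s2] - 2[P1 ln P1 + P2 ln P2]
        J = (1 + 2 * (P1 * np.log(s1) + P2 * np.log(s2))
               - 2 * (P1 * np.log(P1) + P2 * np.log(P2)))
        if J < best_J:
            best_J, best_t = J, bin_centers[t]
    return best_t

# Synthetic bimodal "change image" values: unchanged and changed classes.
rng = np.random.default_rng(6)
x = np.concatenate([rng.normal(-2, 0.7, 5000), rng.normal(2, 1.0, 3000)])
hist, edges = np.histogram(x, bins=256)
print(kittler_illingworth(hist, 0.5 * (edges[:-1] + edges[1:])))  # near 0
```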
29

Empirical evaluation of a Markovian model in a limit order market

Trönnberg, Filip January 2012 (has links)
A stochastic model for the dynamics of a limit order book is evaluated and tested on empirical data. Arrivals of limit, market, and cancellation orders are described in terms of a Markovian queuing system with exponentially distributed occurrences. In this model, several key quantities can be calculated analytically, such as the distribution of times between price moves, the price volatility, and the probability of an upward price move, all conditional on the state of the order book. We show that the exponential distribution poorly fits the occurrences of order book events, and further show that little resemblance exists between the analytical formulas in this model and the empirical data. The log-normal and Weibull distributions are suggested as replacements, as they appear to fit the empirical data better.
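A sketch of the kind of distribution-fit comparison the abstract reports, using scipy's maximum-likelihood fitting and a Kolmogorov-Smirnov statistic as a rough goodness-of-fit score; the inter-arrival data below is simulated, not the thesis's order book data.

```python
import numpy as np
from scipy import stats

# Hypothetical inter-arrival times of order book events (seconds); drawn from a
# heavy-clustering Weibull, so the exponential should fit poorly by design.
rng = np.random.default_rng(7)
gaps = rng.weibull(0.6, 5000) * 0.4

for name, dist in [("exponential", stats.expon),
                   ("log-normal", stats.lognorm),
                   ("Weibull", stats.weibull_min)]:
    params = dist.fit(gaps, floc=0.0)            # fix the location at zero
    ks = stats.kstest(gaps, dist.cdf, args=params)
    print(f"{name:12s} KS statistic = {ks.statistic:.3f}")
# A smaller KS statistic indicates a better fit; on real order book data the
# thesis finds the exponential fit poor relative to log-normal and Weibull.
```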
30

Implementing the circularly polarized light method for determining wall thickness of cellulosic fibres

Edvinsson, Marcus January 2012 (has links)
The wall thickness of pulp fibers plays a major role in the paper industry, but it is currently not possible to measure this property without manual laboratory work. In 2007, researcher Ho Fan Jang patented a technique to automatically measure fiber wall thickness, combining the unique optical properties of pulp fibers with image analysis. In short, the method creates images through an optical system whose color values encode the retardation at a particular wavelength instead of the intensity. A device based on this patent has since been developed by Eurocon Analyzer. This thesis investigates the software aspects of this technique, using sample images generated by the Eurocon Analyzer prototype. The software developed in this thesis is subdivided into three parts for independent consideration. The first is the problem of solving for wall thickness from the colors in the images. The second is the image analysis process of identifying fibers and good points at which to measure them. The last is an investigation of how statistical analysis can be applied to improve results and derive other useful properties such as fiber coarseness. Several problems need to be overcome with this technique. One such problem is that it may be difficult to disambiguate the colors produced by fibers of different thickness. This complication may be reduced by using image analysis and statistical analysis. Another challenge is that theoretical values often differ greatly from the observed values, which makes the computational aspect of the method problematic. The results of this thesis show that the effects of these problems can be greatly reduced and that the method offers promising results. The results clearly distinguish between different pulp samples and show their expected characteristics, but more qualitative reference measurements are needed in order to draw conclusions about the correctness of the results.
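The final conversion step can be illustrated with a toy calculation: a birefringent fiber wall of thickness t produces a retardation proportional to t times the birefringence Δn. Both the Δn value and the factor of two (the beam crossing both walls) below are assumptions for illustration, not figures from the patent.

```python
# Minimal sketch of the retardation-to-thickness step, assuming the image
# analysis has already produced a retardation estimate per measurement point.
BIREFRINGENCE = 0.05   # assumed intrinsic birefringence of the cellulose wall

def wall_thickness_nm(retardation_nm: float) -> float:
    """Convert optical retardation (nm) to a single-wall thickness (nm),
    assuming the beam traverses two walls: retardation = 2 * dn * t."""
    return retardation_nm / (2.0 * BIREFRINGENCE)

print(wall_thickness_nm(550.0))  # 550 nm retardation -> 5500 nm (5.5 um) per wall
```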
