161 |
Analysis of Longitudinal Data with Missing Values: Methods and Applications in Medical Statistics. Dragset, Ingrid Garli January 2009 (has links)
Missing data is a term for values that are, for some reason, not observed in a dataset. Most standard analysis methods are not feasible for datasets with missing values, and methods for handling missing data can yield biased and/or imprecise estimates if they are not appropriate. It is therefore important to employ suitable methods when analyzing such data. Cardiac surgery is a procedure suitable for patients suffering from various heart diseases. It is a physically and psychologically demanding operation for the patients, although the mortality rate is low. Health-related quality of life (HRQOL) is a popular and widespread measurement tool for monitoring the overall situation of patients undergoing cardiac surgery, especially in elderly patients with naturally limited life expectancies [Gjeilo, 2009]. There has been growing attention to possible differences between men and women with respect to HRQOL after cardiac surgery, and the literature is not consistent on this topic. Gjeilo et al. [2008] studied HRQOL in patients before and after cardiac surgery with emphasis on differences between men and women. In the period from September 2004 to September 2005, 534 patients undergoing cardiac surgery at St Olavs Hospital were included in the study. HRQOL was measured by the self-reported questionnaires Short-Form 36 (SF-36) and the Brief Pain Inventory (BPI) before surgery and at six and twelve months follow-up. The SF-36 reflects health-related quality of life by measuring eight conceptual domains of health [Loge and Kaasa, 1998]. Some of the patients did not respond to all questions, and there are missing values in the records of about 41% of the patients. Women have more missing values than men at all time points. The statistical analyses in Gjeilo et al. [2008] employ the complete-case method, which was the most common way of handling missing data until recent years.
The complete-case method discards all subjects with unobserved data prior to the analyses. It makes standard statistical analyses accessible and is the default method for handling missing data in several statistical software packages. However, the complete-case method gives correct estimates only if data are missing completely at random, without any relation to other observed or unobserved measurements. This assumption is seldom met, and violations can result in incorrect estimates and decreased efficiency. The focus of this paper is on improved methods for handling missing values in longitudinal data, that is, observations of the same subjects on multiple occasions. Multiple imputation and imputation by expectation maximization are general methods that can be applied with many standard analysis methods and in several missing data situations. Regression models can also give correct estimates and are available for longitudinal data. In this paper we present the theory of these approaches and their application to the dataset introduced above. The results are compared to the complete-case analyses published in Gjeilo et al. [2008], and the methods are discussed with respect to their ability to handle missing values in this setting. The data of patients undergoing cardiac surgery are analyzed in Gjeilo et al. [2008] with respect to gender differences at each of the measurement occasions: before surgery, and six and twelve months after the operation. This is done with a two-sample Student's t-test assuming unequal variances. All patients observed at the relevant occasion are included in these analyses. Repeated measures ANOVA is used to determine gender differences in the evolution of the HRQOL variables; only patients with fully observed measurements at all three occasions are included in the ANOVA. The methods of expectation maximization (EM) and multiple imputation (MI) are used to obtain plausible complete datasets including all patients.
EM gives a single imputed dataset that can be analyzed in the same way as in the complete-case analysis. MI gives multiple imputed datasets, where each dataset must be analyzed separately and the estimates combined according to a technique called Rubin's rules. Both Student's t-tests and repeated measures ANOVA can be carried out with these imputation methods. The repeated measures ANOVA can be expressed as a regression equation that describes the improvement of the HRQOL scores over time and the variation between subjects. Mixed regression models (MRM) are well suited to modeling longitudinal data with non-response, and can be extended beyond the repeated measures ANOVA to fit the data more adequately. Several MRM are fitted to the data of the cardiac surgery patients to display their properties and their advantages over ANOVA. These models are alternatives to the imputation analyses when the aim is to determine gender differences in the improvement of HRQOL after surgery. The imputation methods and the mixed regression models are assumed to handle missing data adequately, and all of these methods give similar analysis results. These results differ from the complete-case results for some of the HRQOL variables when examining gender differences in the improvement of HRQOL after surgery.
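The pooling step of Rubin's rules mentioned above can be sketched in a few lines. This is a generic illustration with made-up numbers, not code or results from the thesis: each of the m imputed datasets yields a point estimate and a variance, and the rules combine them as follows.

```python
import numpy as np

def rubin_pool(estimates, variances):
    """Combine estimates from m imputed datasets with Rubin's rules."""
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)
    m = len(estimates)
    q_bar = estimates.mean()          # pooled point estimate
    w_bar = variances.mean()          # average within-imputation variance
    b = estimates.var(ddof=1)         # between-imputation variance
    t = w_bar + (1.0 + 1.0 / m) * b   # total variance of the pooled estimate
    return q_bar, t

# Made-up estimates and variances from m = 3 imputed datasets:
q, t = rubin_pool([1.0, 1.2, 0.9], [0.040, 0.050, 0.045])
```

The total variance adds the between-imputation spread to the average within-imputation variance, which is what makes MI standard errors reflect the uncertainty caused by the missing information.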
|
162 |
Parametrization of multi-dimensional Markov chains for rock type modeling. Nerhus, Steinar January 2009 (has links)
A parametrization of a multidimensional Markov chain model (MDMC) is studied with the goal of capturing texture in training images. The conditional distribution function of each row in the image, given the previous rows, is described as a one-dimensional Markov random field (MRF) that depends only on information in the immediately preceding rows. Each of these conditional distribution functions is then an element of a Markov chain that is used to describe the entire image. The parametrization is based on the cliques in the MRF, using different parameters for different clique types with different colors, and for how many rows backward we can trace the same clique type with the same color. One of the advantages of the MDMC model is that we are able to calculate the normalizing constant very efficiently thanks to the forward-backward algorithm. When the normalizing constant can be calculated, we are able to use a numerical optimization routine from R to estimate model parameters through maximum likelihood, and we can use the backward iterations of the forward-backward algorithm to draw realizations from the model. The method is tested on three different training images, and the results show that the method is able to capture some of the texture in all images, but that there is room for improvement. It is reasonable to believe that we can get better results if we change the parametrization. We also see that the result changes if we use the columns, instead of the rows, as the one-dimensional MRF. The method was only tested on images with two colors, and we suspect that, due to the choice of parametrization, it will not work for images with more colors unless there is no correlation between the colors.
|
163 |
An empirical study of the maximum pseudo-likelihood for discrete Markov random fields. Fauske, Johannes January 2009 (has links)
In this text we will look at two parameter estimation methods for Markov random fields on a lattice: maximum pseudo-likelihood estimation and maximum general pseudo-likelihood estimation, which we abbreviate MPLE and MGPLE. The idea behind them is that by maximizing an approximation of the likelihood function, we avoid computing cumbersome normalising constants. In MPLE we maximize the product of the conditional distributions of each variable given all the other variables. In MGPLE we use a compromise between the pseudo-likelihood and the likelihood function as the approximation. We evaluate and compare the performance of MPLE and MGPLE on three different spatial models, for which we have generated observations. We are especially interested in what happens to the quality of the estimates when the number of observations increases. The models we use are the Ising model, the extended Ising model and the Sisim model. All the random variables in the models have two possible states, black or white. For the Ising and extended Ising models we have one and three parameters respectively. For Sisim we have 13 parameters. The quality of both methods gets better as the number of observations grows, and MGPLE gives better results than MPLE. However, certain parameter combinations of the extended Ising model give worse results.
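The core of MPLE is easy to state in code. The sketch below writes down the negative log pseudo-likelihood for a plain Ising model with spins ±1 and a single interaction parameter, and minimizes it by a crude grid search; the models and parametrizations studied in the thesis differ in detail, and the data here are random, so this is purely illustrative:

```python
import numpy as np

def neighbour_sum(x):
    """Sum of the four nearest neighbours at each site (free boundary)."""
    s = np.zeros_like(x, dtype=float)
    s[1:, :] += x[:-1, :]
    s[:-1, :] += x[1:, :]
    s[:, 1:] += x[:, :-1]
    s[:, :-1] += x[:, 1:]
    return s

def neg_log_pl(theta, x):
    """Negative log pseudo-likelihood: product over sites of
    P(x_i | neighbours) = exp(theta * x_i * s_i) / (2 cosh(theta * s_i))."""
    s = neighbour_sum(x)
    return -np.sum(theta * x * s - np.log(2.0 * np.cosh(theta * s)))

# Random +/-1 image as stand-in data; grid search for the MPLE.
x = np.where(np.random.default_rng(0).random((20, 20)) < 0.5, -1.0, 1.0)
grid = np.linspace(-2.0, 2.0, 401)
theta_hat = grid[int(np.argmin([neg_log_pl(t, x) for t in grid]))]
```

No normalizing constant over the whole lattice appears: each factor is the conditional distribution of a single spin given its neighbours, which is exactly how pseudo-likelihood sidesteps the cumbersome constant.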
|
164 |
Numerical Methods for Optical Interference Filters. Marthinsen, Håkon January 2009 (has links)
We present the physics behind general optical interference filters and the design of dielectric anti-reflective filters. These can be anti-reflective at a single wavelength or in an interval. We solve the first case exactly for single and multiple layers and then present how the second case can be solved through the minimisation of an objective function. Next, we present several optimisation methods that are later used to solve the design problem. Finally, we test the different optimisation methods on a test problem and then compare the results with those obtained by the OpenFilters computer programme.
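The exactly solvable single-layer case can be checked numerically. Under the standard characteristic-matrix treatment at normal incidence (our own sketch with made-up refractive indices, not the thesis code), a quarter-wave film of index n1 = sqrt(n0 * ns) gives zero reflectance at the design wavelength:

```python
import numpy as np

def reflectance(n0, ns, n1, d, lam):
    """Reflectance of a single dielectric layer (index n1, thickness d)
    on a substrate (index ns) in a medium of index n0, at wavelength lam."""
    delta = 2.0 * np.pi * n1 * d / lam               # phase thickness
    M = np.array([[np.cos(delta), 1j * np.sin(delta) / n1],
                  [1j * n1 * np.sin(delta), np.cos(delta)]])
    B, C = M @ np.array([1.0, ns])                   # characteristic matrix
    r = (n0 * B - C) / (n0 * B + C)                  # amplitude reflectance
    return abs(r) ** 2

n0, ns = 1.0, 2.25            # air and a made-up substrate
n1 = np.sqrt(n0 * ns)         # ideal anti-reflective film index, here 1.5
lam = 550e-9
d = lam / (4.0 * n1)          # quarter-wave optical thickness
R = reflectance(n0, ns, n1, d, lam)   # essentially zero at the design wavelength
```

At d = 0 the matrix is the identity and the formula reduces to the bare-substrate Fresnel reflectance ((n0 - ns)/(n0 + ns))^2; building an objective function from such reflectances over a wavelength interval gives the minimisation problem described above.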
|
165 |
Identity Protection, Secrecy and Authentication in Protocols with Compromised Agents. Båtstrand, Anders Lindholm January 2009 (has links)
The design of security protocols is receiving an increasing level of academic interest, as an increasing number of important tasks are done over the Internet. Among the fields being researched are formal methods for modeling and verification of security protocols. One such method has been developed by Cremers and Mauw, and it is this method we have chosen to focus on in this paper. The model by Cremers and Mauw specifies a mathematical way to represent security protocols and their execution. It then defines conditions the protocols can fulfill, called security requirements. These typically state that in all possible executions, given a session in which all parties are honest, certain mathematical statements hold. Our aim is to extend the security requirements already defined in the model to allow some parties in the session to be under the control of an attacker, and to add a new definition of identity protection. We have done this by slightly extending the model and stating a new set of security requirements.
|
166 |
A General Face Recognition System. Manikarnika, Achim Sanjay January 2006 (has links)
In this project a real-time face detection and recognition system has been discussed and implemented. The main focus has been on the detection process, which is the first and most important step before starting the actual recognition. Computationally intensive methods can give good results, but at the cost of execution speed. The implemented algorithm builds upon the work of Garcia and Tziritas, but trades some accuracy for faster execution. The program needs between 1 and 5 seconds on a standard workstation to analyze an image. On an image database with a lot of variety in the images, the system found 70-75% of the faces.
|
167 |
Hilberttransformpar og negativ brytning / Hilbert Transform Relations and Negative Refraction. Lind-Johansen, Øyvind January 2006 (has links)
In recent years it has become possible to fabricate media whose permittivity $\epsilon_r=\chi_e+1$ and permeability $\mu_r=\chi_m+1$ have simultaneously negative real parts. In such media one gets negative refraction, and this can be exploited to make a lens that in principle can achieve unlimited resolution at a single frequency. Let $\chi=u+iv$ denote either $\chi_e$ or $\chi_m$. I show that the resolution of the lens is given by $-\ln{(|\chi+2|/2)}/d$, provided that the thickness $d$ of the lens is somewhat less than a wavelength. From this we see that the resolution is infinite if $u=-2$ and $v=0$, and that, to obtain the highest possible resolution, we want to minimize $|\chi+2|=\sqrt{(u+2)^2+v^2}$. Causality implies that $\chi$ is an element of the space $H^2$ of analytic and square-integrable functions in the upper half-plane. It follows that $u$ and $v$ are a Hilbert transform pair. Furthermore, we know that $\chi$ is Hermitian and that $v$ is positive for positive arguments, which reflects the passivity principle for electromagnetic media. Together, this limits how high the resolution can be over a frequency interval. Recently, a parametrization has been found of the imaginary part of such functions on a frequency interval, given that the real part is constant on the interval. I identify these functions as elements of a larger class of Hermitian $H^2$ functions whose imaginary part can be parametrized. It is of particular interest to find absolute lower bounds for the $L^\infty$ norm of $|\chi+2|$ on the interval. It turns out that by setting the real part equal to $x^2/b^2-(a^2+b^2)/(2b^2)-2$ on the interval, this lower bound can be roughly halved compared with the case where the real part is constant and equal to $-2$.
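The resolution formula $-\ln{(|\chi+2|/2)}/d$ from the abstract is easy to evaluate numerically. The sketch below uses made-up values of the susceptibility and a slab thickness of 100 nm to show how losses (the imaginary part $v$) degrade the resolution:

```python
import numpy as np

def resolution(chi, d):
    """Resolution of the slab lens: -ln(|chi + 2| / 2) / d."""
    return -np.log(abs(chi + 2.0) / 2.0) / d

d = 100e-9                        # slab thickness, somewhat below a wavelength
r_low_loss = resolution(-2 + 0.01j, d)   # small v: high resolution
r_high_loss = resolution(-2 + 0.1j, d)   # larger v: lower resolution
```

With $u=-2$ the resolution is controlled entirely by the loss term $v$, which is why the passivity constraint on $v$ over an interval bounds the achievable resolution.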
|
168 |
Haarmål og høyreregulær representasjon på kompakte kvantegrupper / Haar Measure and Right Regular Representations of Compact Quantum Groups. Bertelsen, Vegard Steine January 2006 (has links)
In this thesis we define and prove the existence of Haar measures on compact quantum groups, and look at how the Haar measure can be used to construct a right regular representation. We then carry out this construction quite explicitly for quantum SU(2).
|
169 |
Stokastisk analyse av råtepotensial i huskledning / Stochastic Analysis of Mould Growth Rate in House Cladding. Stokkenes Johansen, Øivind January 2007 (has links)
Regression analysis has been used to analyze the collected data. To avoid or limit the effects of multicollinearity, ridge regression is employed. Because the measurements are correlated with each other, generalized least squares is used. We attempt to answer the research question in three situations. The first is meant to be representative of conditions similar to those the test station is exposed to. The second and third represent situations that are weakly and strongly exposed to mould growth, respectively.
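The ridge estimator mentioned above has a simple closed form. This generic sketch (with simulated data, not the thesis's cladding measurements) shows the estimator and the coefficient shrinkage that combats multicollinearity:

```python
import numpy as np

def ridge(X, y, lam):
    """Ridge estimate: (X'X + lam * I)^{-1} X'y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Simulated, noiseless data with a known coefficient vector.
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true

beta_ols = ridge(X, y, 0.0)    # lam = 0 reduces to ordinary least squares
beta_rdg = ridge(X, y, 10.0)   # lam > 0 shrinks the coefficients
```

The penalty trades a little bias for a large reduction in variance when the columns of X are nearly collinear; generalized least squares additionally whitens X and y by the error covariance before applying the same machinery.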
|
170 |
Lineær mikset modell for kompressor data med en applikasjon / Linear Mixed Model for Compressor Data with an Application. Herdahl, Mads January 2008 (has links)
StatoilHydro is the operator of the Åsgard oil and gas field off the coast of Trøndelag, Norway, where large compressors for injection, recompression and export of natural gas are installed. The facility transports and stores up to 36 million $Sm^3$ of gas every day. If the compressors are not optimally operated, substantial value is lost. This paper describes how to use linear mixed models to model the condition of the compressors. The focus has been on the 1- and 2-stage recompression compressors. Reference data from Dresser-Rand have been used to build the model. Head and flow data are modelled, and the explanatory variables used are molecular weight, rotational speed and an efficiency indicator. The paper also shows how cross validation is used to give an indication of how well future data points will fit the model. A graphical user interface has been developed to perform estimation and plotting with various models. Different models are tested and compared by likelihood methods. For a relatively simple model using three explanatory variables, reasonable predictions are obtained. Results are less good for very high rotational speeds and high molecular weights.
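The cross-validation idea used to judge how future data points will fit the model can be sketched generically. Here plain least squares stands in for the mixed model, and the data are simulated (noiseless, so the held-out errors should be essentially zero), making this illustrative only:

```python
import numpy as np

def cv_prediction_errors(X, y, k=5, seed=0):
    """K-fold cross-validation: fit on k-1 folds, predict the held-out fold,
    and return all held-out prediction errors."""
    idx = np.random.default_rng(seed).permutation(len(y))
    errors = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        beta, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
        errors.extend(y[fold] - X[fold] @ beta)
    return np.array(errors)

# Simulated stand-in data: 40 points, 3 explanatory variables, exact fit.
rng = np.random.default_rng(2)
X = rng.normal(size=(40, 3))
y = X @ np.array([1.0, 2.0, -1.0])     # noiseless toy responses
errs = cv_prediction_errors(X, y, k=5)
```

Large held-out errors concentrated in particular regions of the covariate space, such as high rotational speeds and high molecular weights here, would indicate where the model extrapolates poorly.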
|