About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Linear Models with Nested Error Structure in Predicting Vision Loss for Patients with Subretinal Neovascular Membranes

Hou, Meiying 08 1900 (has links)
Age-related macular degeneration (AMD) and presumed ocular histoplasmosis (POHS) are common causes of macular degeneration. Both are major causes of blindness, with AMD being the leading cause of blindness in people over the age of 65. The major cause of visual loss in both conditions is the presence of a subretinal neovascular membrane (NVM) in the macula. Sometimes these conditions can be treated successfully with laser therapy. Our task was to develop a regression model for predicting post-treatment vision as a function of time from treatment and baseline prognostic factors measured at diagnosis. In particular, the analysis examined how patients' post-treatment vision is affected by these baseline factors. A linear model with a nested error structure was used. / Thesis / Master of Science (MS)
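
The model itself is not reproduced in this record; as a rough sketch of a linear model with a nested error structure (repeated measurements nested within patients), one could fit a random-intercept model with statsmodels' MixedLM. All variable names and simulated values below are illustrative assumptions, not the thesis data:

```python
# Minimal sketch of a linear model with a nested error structure:
# repeated post-treatment vision measurements are nested within patients,
# so a per-patient random intercept induces within-patient correlation.
# Variable names and simulated values are illustrative, not the thesis data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_patients, n_visits = 40, 5

patient = np.repeat(np.arange(n_patients), n_visits)
months = np.tile(np.arange(1, n_visits + 1) * 3, n_patients)       # time from treatment
baseline = np.repeat(rng.normal(0.5, 0.15, n_patients), n_visits)  # baseline prognostic factor
u = np.repeat(rng.normal(0, 0.08, n_patients), n_visits)           # patient-level error
vision = 0.2 + 0.9 * baseline - 0.01 * months + u + rng.normal(0, 0.05, len(u))

df = pd.DataFrame({"vision": vision, "months": months,
                   "baseline": baseline, "patient": patient})

# The random intercept per patient gives the nested (two-level) error structure.
fit = smf.mixedlm("vision ~ months + baseline", df, groups=df["patient"]).fit()
print(fit.summary())
```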
2

Consequences of Non-Modeled and Modeled Between Case Variation in the Level-1 Error Structure in Multilevel Models for Single-Case Data: A Monte Carlo Study

Baek, Eun Kyeng 01 January 2015 (has links)
The multilevel modeling (MLM) approach has great flexibility in that it can handle various methodological issues that may arise with single-case studies, such as the need to model possible dependency in the errors, linear or nonlinear trends, and count outcomes (e.g., Van den Noortgate & Onghena, 2003a). Within the MLM framework, researchers can model not only dependency in the errors but also a variety of level-1 error structures. The effect of misspecifying the level-1 error structure has been well studied for MLM analyses: generally, estimates of the fixed effects were found to be unbiased, but estimates of the variance parameters were substantially biased when the level-1 error structure was misspecified. However, previous misspecification studies, as well as applied studies of multilevel models with single-case data, have made a critical assumption: that the level-1 error structure is constant across all participants. Previous studies suggest that the level-1 error structure may not be the same across participants (Baek & Ferron, 2011; Baek & Ferron, 2013; Maggin et al., 2011). If there is substantial between-case variation in the level-1 error structure, it can affect estimation of both the fixed and the random effects. Despite the importance of this issue, the effects of modeling between-case variation in the level-1 error structure had not yet been systematically studied. The purpose of this simulation study was to extend MLM growth curve models to allow the level-1 error structure to vary across cases, and to identify the consequences of modeling versus not modeling between-case variation in the level-1 error structure for single-case studies. A Monte Carlo simulation was conducted that examined conditions varying in series length per case (10 or 20), number of cases (4 or 8), true level-1 error structure (homogeneous, moderately heterogeneous, or severely heterogeneous), level-2 error variance in the baseline slope and shift in slope (.05 or .2 times the level-1 variance), and the method used to analyze the data: allowing the level-1 error variance and autocorrelation to vary across cases (Model 2) or not (Model 1). All simulated data sets were analyzed using Bayesian estimation. For each condition, 1,000 data sets were simulated, and bias, RMSE, and credible interval (CI) coverage and width were examined for the fixed treatment effects and the variance components. The results showed that the choice of modeling method for the level-1 error structure had little to no impact on estimates of the fixed treatment effects, but a substantial impact on estimates of the variance components, especially the level-1 error standard deviation and the autocorrelation parameters. Modeling between-case variation in the level-1 error structure (Model 2) performed better than not modeling it (Model 1) for estimates of the level-1 error standard deviation and the autocorrelation parameters, and the advantage of Model 2 grew as the degree of heterogeneity in the data increased.
The results also indicated that whether the level-1 error structure was under-specified, over-specified, or correctly specified had little to no impact on estimates of the fixed treatment effects, but a substantial impact on the level-1 error standard deviation and the autocorrelation. While the correctly specified and over-specified models performed fairly well, the under-specified model performed poorly. Moreover, the form of heterogeneity in the data (one extreme case versus a more even spread of the level-1 variances) might have some impact on the relative effectiveness of the two models, whereas the degree of autocorrelation had little to no impact on their relative performance.
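
As a sketch of the heterogeneity at issue here (a toy data-generating process in the spirit of the study's Model 2, with invented parameter values rather than the study's actual design), each case below receives its own level-1 error standard deviation and autocorrelation:

```python
# Sketch of a heterogeneous data-generating process for single-case data:
# each case gets its own level-1 error variance and autocorrelation, which is
# the between-case variation Model 2 allows for. Values are illustrative only.
import numpy as np

rng = np.random.default_rng(1)

def ar1_errors(n, sigma, phi, rng):
    """Generate n stationary AR(1) errors with marginal SD sigma and lag-1 autocorrelation phi."""
    e = np.empty(n)
    e[0] = rng.normal(0, sigma)
    for t in range(1, n):
        e[t] = phi * e[t - 1] + rng.normal(0, sigma * np.sqrt(1 - phi**2))
    return e

n_cases, series_length = 4, 20
phase = np.array([0] * 10 + [1] * 10)                # baseline vs. treatment phase

for case in range(n_cases):
    sigma = rng.uniform(0.5, 2.0)                    # case-specific level-1 SD
    phi = rng.uniform(0.0, 0.6)                      # case-specific autocorrelation
    b0, b1 = rng.normal(10, 1), rng.normal(3, 0.5)   # case-specific intercept and effect
    y = b0 + b1 * phase + ar1_errors(series_length, sigma, phi, rng)
    print(f"case {case}: sigma={sigma:.2f}, phi={phi:.2f}, "
          f"observed shift={y[10:].mean() - y[:10].mean():.2f}")
```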
3

Error Structure of Randomized Design Under Background Correlation with a Missing Value

Chang, Tseng-Chi 01 May 1965 (has links)
The analysis of variance technique is probably the most popular statistical technique used for testing hypotheses and estimating parameters. Eisenhart presents two classes of problems solvable by the analysis of variance and the assumptions underlying each class. Cochran lists the assumptions and also discusses the consequences when they are not met. It is evident that if all the assumptions are not satisfied, the confidence placed in any result obtained in this manner is adversely affected, to a degree depending on the extent of the violation. One of the assumptions in analysis of variance procedures is that of uncorrelated errors. The experimenter may not always be able to meet this condition, for economic or environmental reasons. In fact, Wilk questions the validity of the assumption of uncorrelated errors in any physical situation. For example, consider an experiment run over a sequence of years: a correlation due to years may exist, no matter what randomization technique is used, because the outcome of the previous year largely determines the outcome of the current year. Another example is the selection of experimental units from the same source, such as sampling students with the same background or selecting units from the same production process. A shared condition, such as background or a defect in the production process, may thus induce a correlation among the experimental units. Problems of this nature frequently occur in industrial, biological, and psychological experiments.
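
A small simulation (not from the thesis) makes the consequence concrete: when units drawn from a common source are equicorrelated within treatment groups, the usual one-way ANOVA F-test, which assumes uncorrelated errors, rejects a true null far more often than its nominal level:

```python
# Illustrative check of the consequence described above: background correlation
# within treatment groups inflates the Type I error rate of the F-test.
import numpy as np
from scipy import stats
from scipy.linalg import block_diag

rng = np.random.default_rng(2)
k, n, rho, reps = 4, 5, 0.3, 2000   # treatments, units per treatment, correlation, runs

block = np.full((n, n), rho) + (1 - rho) * np.eye(n)  # shared-background block
cov = block_diag(*[block] * k)                        # correlation follows the source

rejections = 0
for _ in range(reps):
    y = rng.multivariate_normal(np.zeros(k * n), cov)  # H0 true: no treatment effects
    groups = [y[i * n:(i + 1) * n] for i in range(k)]
    rejections += stats.f_oneway(*groups).pvalue < 0.05

print(f"empirical Type I error at nominal 0.05: {rejections / reps:.3f}")
```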
4

Sensitivity, Noise and Detection of Enzyme Inhibition in Progress Curves

Gutiérrez Arenas, Omar January 2006 (has links)
Starting with the development of an enzymatic assay, where an enzyme in solution hydrolysed a solid-phase bound peptide, a model for the kinetics of enzyme action was introduced. This model allowed the estimation of kinetic parameters and enzyme activity for a system that has the peculiarity of being saturable not with the substrate but with the enzyme. In a derivation of the model, it was found that the sensitivity of the signal to variations in the enzyme concentration had a transient increase along the reaction progress, with a maximum at high substrate conversion levels.

The same behaviour was derived for the sensitivity in classical homogeneous enzymatic assays, and experimental evidence of this was obtained. The impact of the transient increase of the sensitivity on the error structure, and on the ability of homogeneous end-point enzymatic assays to detect competitive inhibition, came into focus. First, a non-monotonous shape in the standard deviation of progress curve data was found and attributed to the random dispersion in the enzyme concentration operating through the transient increase in the sensitivity. Second, a model for the detection limit of the quantity Ki/[I] (the IDL-factor) as a function of the substrate conversion level was developed for homogeneous end-point enzymatic assays.

It was found that the substrate conversion level where the IDL-factor reached an optimum was beyond the initial velocity range. Moreover, at this optimal point not only the ability to detect inhibitors but also the robustness of the assays was maximized. These results may prove relevant in drug discovery for optimising end-point homogeneous enzymatic assays that are used to find inhibitors against a target enzyme in compound libraries, which are usually big (>10000) and crowded with irrelevant compounds.
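
A toy calculation (a simple first-order progress curve, not the thesis's solid-phase model) illustrates the sensitivity transient described above; in this simplified case the sensitivity of the signal to the enzyme concentration peaks at about 63% substrate conversion:

```python
# Toy first-order progress curve P(t) = S0 * (1 - exp(-k*E*t)): the sensitivity
# dP/dE rises along the reaction and peaks well beyond the initial-velocity
# range, at conversion 1 - 1/e (about 63%) in this simplified model.
import numpy as np

S0, k, E = 1.0, 1.0, 1.0                     # arbitrary units
t = np.linspace(0.01, 5, 500)

conversion = 1 - np.exp(-k * E * t)          # P(t) / S0
dP_dE = S0 * k * t * np.exp(-k * E * t)      # sensitivity of the signal to E

i = np.argmax(dP_dE)
print(f"sensitivity peaks at conversion = {conversion[i]:.2f}")  # ~0.63
```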
5

Evaluation of instantaneous and cumulative models for reactivity ratio estimation with multiresponse scenarios

Zhou, Xiaoqin January 2004 (has links)
Estimating reactivity ratios in multicomponent polymerizations is becoming increasingly important. At the same time, using cumulative models is becoming imperative, as some multicomponent systems are inherently so fast that instantaneous "approximate" models cannot be used. In the first part of the thesis, triad fractions (sequence length characteristics) are employed in a multiresponse scenario, investigating different error structures and levels. A comparison is given between instantaneous triad fraction models and the instantaneous composition model, which represent the current state of the art. In the second part of the thesis, extensions are discussed with cumulative composition and triad fraction models over the whole conversion range, thus relating the problem of reactivity ratio estimation to the optimal design of experiments (i.e., optimal sampling) over polymerization time and conversion. The performance of cumulative multiresponse models is superior to that of their instantaneous counterparts, which can be explained from an information-content point of view. As a side project, the existence of azeotropic points is investigated in terpolymer (Alfrey-Goldfinger equation) and tetrapolymer (Walling-Briggs equation) systems.
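
For the two-monomer case, the instantaneous composition model referred to above is the Mayo-Lewis equation; the sketch below, with purely illustrative reactivity ratios, also locates the azeotropic point where copolymer and feed compositions coincide:

```python
# Two-monomer instantaneous composition model (Mayo-Lewis equation) and its
# azeotropic point. Reactivity ratios r1, r2 are illustrative, not thesis data.
import numpy as np

def mayo_lewis(f1, r1, r2):
    """Instantaneous copolymer composition F1 given monomer feed fraction f1."""
    f2 = 1 - f1
    return (r1 * f1**2 + f1 * f2) / (r1 * f1**2 + 2 * f1 * f2 + r2 * f2**2)

r1, r2 = 0.5, 0.3
f1 = np.linspace(0.01, 0.99, 99)
F1 = mayo_lewis(f1, r1, r2)

# Azeotropic feed composition (F1 == f1), which exists when r1 < 1 and r2 < 1:
f1_azeo = (1 - r2) / (2 - r1 - r2)
print(f"azeotrope at f1 = {f1_azeo:.3f}, "
      f"max composition drift |F1 - f1| = {np.abs(F1 - f1).max():.3f}")
```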
6

Block Designs Under Autocorrelated Errors

Shu, Xiaohua January 2011 (has links)
This research work focuses on balanced and partially balanced incomplete block designs when observations within blocks are correlated. The topic was motivated by a problem in pharmaceutical research in which several treatments are allocated to individuals and repeated measurements are taken on each individual; in that case, there is correlation among the observations taken on the same individual. Typically, it is reasonable to assume that observations taken close together on the same individual are more highly correlated than observations taken far apart. In other settings, it is reasonable to assume that the correlation between any two observations within an individual is the same. We have characterized balanced and partially balanced incomplete block designs when observations within blocks are autocorrelated. In Chapter 3, we provide an explicit expression for the average variance of estimated elementary treatment contrasts for designs obtained from Type I and II series of orthogonal arrays under autocorrelated errors, and compare them with the corresponding balanced incomplete block designs with uncorrelated errors. The relative efficiency of a balanced incomplete block design compared to the corresponding design obtained from Type I and II series of orthogonal arrays under autocorrelated errors does not depend on the number of treatments (v) and is an increasing function of the block size (k). When orthogonal arrays of Type I or Type II do not exist for a given number of treatments, we provide alternative partially balanced designs for autocorrelated errors. In Chapter 4, we rearrange the treatments in each block of symmetric balanced incomplete block designs and use them with an autocorrelated error structure for the plots within a block. The C-matrix of estimated treatment effects under autocorrelation is given, along with the relative efficiency of symmetric balanced incomplete block designs with independent errors compared to the autocorrelated designs. In Chapter 5, we discuss the compound symmetry correlation structure within blocks. An explicit expression for the average variance of designs obtained from Type I and II series of orthogonal arrays and of symmetric balanced incomplete block designs under compound symmetric errors is provided and compared with the corresponding balanced incomplete block designs with uncorrelated errors. Finally, the relative efficiencies of these designs under autocorrelated versus compound symmetric error structures are given. / Statistics
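
As a generic sketch (not the dissertation's own derivation or designs), the way a within-block AR(1) error structure enters the C-matrix, and hence the average variance of elementary treatment contrasts, can be computed numerically for a small symmetric BIBD:

```python
# Generic sketch: for a block design y = X*tau + Z*beta + e with block-diagonal
# AR(1) covariance V, the information (C-)matrix for treatments is
# C = X'V^-1 X - X'V^-1 Z (Z'V^-1 Z)^-1 Z'V^-1 X, and the average variance of
# elementary contrasts tau_i - tau_j follows from the Moore-Penrose inverse of C.
import numpy as np

# Symmetric BIBD with v = b = 7, k = r = 3, lambda = 1 (plot order as listed).
blocks = [(0, 1, 3), (1, 2, 4), (2, 3, 5), (3, 4, 6),
          (4, 5, 0), (5, 6, 1), (6, 0, 2)]
v, k, rho = 7, 3, 0.4                     # rho: illustrative AR(1) autocorrelation

ar1 = rho ** np.abs(np.subtract.outer(np.arange(k), np.arange(k)))
V = np.kron(np.eye(len(blocks)), ar1)     # block-diagonal within-block covariance
Vinv = np.linalg.inv(V)

N = len(blocks) * k
X = np.zeros((N, v))                      # treatment incidence
Z = np.kron(np.eye(len(blocks)), np.ones((k, 1)))   # block incidence
for b, blk in enumerate(blocks):
    for pos, trt in enumerate(blk):
        X[b * k + pos, trt] = 1

C = X.T @ Vinv @ X - X.T @ Vinv @ Z @ np.linalg.inv(Z.T @ Vinv @ Z) @ Z.T @ Vinv @ X
Cplus = np.linalg.pinv(C)                 # C has rank v - 1

pairs = [(i, j) for i in range(v) for j in range(i + 1, v)]
avg_var = np.mean([Cplus[i, i] + Cplus[j, j] - 2 * Cplus[i, j] for i, j in pairs])
print(f"average contrast variance under AR(1), rho={rho}: {avg_var:.4f}")
```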
7

Protection of multimedia contents for data certification

Lefèvre, Pascal 15 June 2018 (has links)
For more than twenty years, access to technology has become very easy, given its omnipresence in everyday life and its low cost. This access to digital technology allows anyone with a computer or a smartphone to view and modify digital content. With the progress of online storage, the quantity of digital content such as audio, images and video on the internet has exploded and continues to grow. Knowing how to identify the source of an image, and to certify whether or not it has been modified, provides information needed to authenticate an image and thereby protect intellectual property and copyright, for example. One approach to these problems is digital watermarking, which consists in embedding a mark in an image that can later authenticate it. In this thesis, we first study digital watermarking with the aim of proposing methods that are more robust to image modifications, thanks to error-correcting codes: depending on the error structure produced by the modification of a watermarked image, one correcting code will be more effective than another. We also propose to integrate a new family of error-correcting codes, called rank metric codes, into watermarking. We then propose to improve the invisibility of watermarking methods for colour images. When a mark is embedded, the resulting image degradations are perceived differently by the human visual system depending on the colour; we propose a biological model of colour perception that allows the psychovisual distortions introduced at embedding to be minimized. All of these techniques are tested on natural images in a data-embedding context.
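
As a deliberately simplified sketch of the first idea, the mark can be protected by an error-correcting code before embedding; the thesis works with rank metric codes, whereas the toy example below uses a 5× repetition code and LSB embedding, both chosen for brevity rather than fidelity:

```python
# Simplified illustration: protect an embedded mark with an error-correcting
# code so it survives image modifications. This sketch uses a 5x repetition
# code (the simplest possible code) and LSB embedding, not the thesis's
# rank metric codes; the "image" and noise model are stand-ins.
import numpy as np

rng = np.random.default_rng(3)
REP = 5

mark = rng.integers(0, 2, 64)                     # 64-bit mark to embed
codeword = np.repeat(mark, REP)                   # repetition encoding

image = rng.integers(0, 256, codeword.size).astype(np.uint8)  # stand-in pixels
watermarked = (image & 0xFE) | codeword           # write code bits into the LSBs

# Simulate an image modification that flips 10% of the LSBs.
noisy = watermarked ^ (rng.random(codeword.size) < 0.10).astype(np.uint8)

received = noisy & 1
decoded = (received.reshape(-1, REP).sum(axis=1) > REP // 2).astype(int)  # majority vote
print(f"bit errors before decoding: {(received != codeword).sum()}, "
      f"in recovered mark: {(decoded != mark).sum()}")
```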
