51

The Asymptotic Loss of Information for Grouped Data

Felsenstein, Klaus, Pötzelberger, Klaus January 1995 (has links) (PDF)
We study the loss of information (measured in terms of the Kullback-Leibler distance) caused by observing "grouped" data (observing only a discretized version of a continuous random variable). We analyse the asymptotic behaviour of the loss of information as the partition becomes finer. In the case of a univariate observation, we compute the optimal rate of convergence and characterize asymptotically optimal partitions (into intervals). In the multivariate case, we derive the asymptotically optimal regular sequences of partitions. Furthermore, we compute the asymptotically optimal transformation of the data when a sequence of partitions is given. Examples demonstrate the efficiency of the suggested discretizing strategy even for few intervals. (author's abstract) / Series: Forschungsberichte / Institut für Statistik
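For orientation, the quantity under study can be written compactly. The display below is an illustrative formulation of the grouped-data information loss in the univariate case, with notation chosen here for exposition rather than taken from the paper:

```latex
% Information loss from grouping (illustrative notation).
% f: density of the continuous observation; A_1, ..., A_k: partition into
% intervals; p_i = \int_{A_i} f(x) dx: cell probability; |A_i|: interval length.
\[
  D\bigl(f \,\big\|\, f_k\bigr)
  \;=\; \sum_{i=1}^{k} \int_{A_i} f(x)\,
        \log \frac{f(x)}{p_i / |A_i|}\, dx,
  \qquad f_k(x) = \frac{p_i}{|A_i|} \ \text{for } x \in A_i .
\]
% f_k is the piecewise-constant density implied by observing only the cell
% containing x; the paper studies how fast this Kullback-Leibler distance
% tends to zero as the partition is refined.
```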
52

Nonlinear Mixed Effects Methods for Improved Estimation of Receptor Occupancy in PET Studies

Kågedal, Matts January 2014 (has links)
Receptor occupancy assessed by Positron Emission Tomography (PET) can provide important translational information to help bridge information from one drug to another or from animal to man. The aim of this thesis was to develop nonlinear mixed effects methods for estimating the relationship between drug exposure and receptor occupancy for the two mGluR5 antagonists AZD9272 and AZD2066 and for the 5HT1B receptor antagonist AZD3783. Optimal designs for improved estimation of the exposure-occupancy relationship, as well as for improved dose finding in neuropathic pain treatment, were also investigated. Different modeling approaches were applied. For AZD9272, the radioligand kinetics and receptor occupancy were simultaneously estimated using arterial concentrations as the input function and including two brain regions of interest. For AZD2066, a model was developed in which brain/plasma partition coefficients from ten different brain regions were included simultaneously as observations. For AZD3783, the simplified reference tissue model was extended to allow different non-specific binding in the reference region and the brain regions of interest, and the possibility of using white matter as the reference was also evaluated. Optimal dose selection for improved precision of receptor occupancy, as well as of the minimum effective dose of a neuropathic pain treatment, was assessed using the D-optimal and Ds-optimal criteria. Simultaneous modelling of the radioligand and occupancy provided a means to avoid simplifications or approximations and made it possible to test or relax assumptions. Simultaneous inclusion of several brain regions with different receptor densities markedly improved the precision of the affinity parameter. Higher precision in the relevant parameters was achieved with designs based on the Ds-optimal rather than the D-optimal criterion. The optimal design for improved precision of the relationship between dose and receptor occupancy depended on the number of brain regions and on their receptor densities. In conclusion, this thesis presents novel non-linear mixed effects models for estimating the relationship between drug exposure and receptor occupancy, providing useful translational information and allowing for better-informed drug development.
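As background, the exposure-occupancy relationship for a competitive antagonist is commonly described by a saturable binding model. The sketch below is not the thesis's mixed effects model; it is a minimal fixed-effects illustration with synthetic data, where `kd` (the concentration giving 50% occupancy) is a hypothetical parameter name:

```python
import numpy as np
from scipy.optimize import curve_fit

def occupancy(conc, kd):
    # Saturable binding: occupancy approaches 100% as exposure grows;
    # kd is the plasma concentration giving 50% receptor occupancy.
    return conc / (kd + conc)

# Synthetic exposure-occupancy data (illustrative values only)
rng = np.random.default_rng(1)
conc = np.array([5.0, 15.0, 45.0, 135.0, 405.0])
occ = occupancy(conc, 50.0) + rng.normal(0.0, 0.03, conc.size)

kd_hat, kd_cov = curve_fit(occupancy, conc, occ, p0=[10.0])
print(f"estimated kd: {kd_hat[0]:.1f}")
```

A nonlinear mixed effects version would additionally place random effects on `kd` across subjects and, as in the thesis, could model the radioligand kinetics simultaneously.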
53

Novel Pharmacometric Methods for Design and Analysis of Disease Progression Studies

Ueckert, Sebastian January 2014 (has links)
With societies aging all around the world, the global burden of degenerative diseases is expected to increase exponentially. From the perspective of drug development, degenerative diseases represent an especially challenging class. Clinical trials, in this context often termed disease progression studies, are long, costly, require many individuals, and have low success rates. It is therefore crucial to use informative study designs and to analyze the resulting trial data efficiently. The aim of this thesis was to develop novel approaches facilitating both the design and the analysis of disease progression studies. This aim was pursued in three stages: (i) the characterization and extension of pharmacometric software, (ii) the development of new methodology around statistical power, and (iii) the demonstration of application benefits. The optimal design software PopED was extended to simplify the application of optimal design methodology when planning a disease progression study. The performance of non-linear mixed effects estimation algorithms for trial data analysis was evaluated in terms of bias, precision, robustness with respect to initial estimates, and runtime. A novel statistic allowing for explicit optimization of study design for statistical power was derived and found to outperform existing methods. Monte-Carlo power studies were accelerated through application of parametric power estimation, delivering full power-versus-sample-size curves from a few hundred Monte-Carlo samples. Optimal design and explicit optimization for statistical power were applied to the planning of a study in Alzheimer's disease, resulting in a 30% smaller study size when targeting 80% power. The analysis of ADAS-cog score data was improved through application of item response theory, yielding a more exact description of the assessment score, increased statistical power, and enhanced insight into the properties of the assessment. In conclusion, this thesis presents novel pharmacometric methods that can help address the challenges of designing and analyzing disease progression studies.
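The parametric power estimation idea can be sketched compactly: approximate the distribution of the likelihood ratio test statistic under the alternative by a noncentral chi-square, estimate the noncentrality from a modest number of Monte-Carlo samples, and exploit the fact that the noncentrality scales roughly linearly with sample size. The code below is a simplified illustration under those assumptions, not the thesis's implementation; the moment-based noncentrality estimate is chosen here for brevity:

```python
import numpy as np
from scipy import stats

def parametric_power(lrt_stats, df, n_ref, n_grid, alpha=0.05):
    # Moment estimate of the noncentrality: E[ncx2(df, lam)] = df + lam.
    lam_ref = max(np.mean(lrt_stats) - df, 0.0)
    crit = stats.chi2.ppf(1.0 - alpha, df)  # critical value under H0
    # Noncentrality assumed proportional to sample size, giving a full
    # power-versus-N curve from one batch of Monte-Carlo samples.
    return {n: stats.ncx2.sf(crit, df, lam_ref * n / n_ref) for n in n_grid}

# Illustrative use: 300 simulated LRT statistics from a study of 100 subjects
lrt = stats.ncx2.rvs(1, 8.0, size=300, random_state=0)
print(parametric_power(lrt, df=1, n_ref=100, n_grid=[50, 100, 200]))
```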
54

Robust designs for field experiments with blocks

Mann, Rena Kaur 28 July 2011 (has links)
This thesis focuses on the design of field experiments with blocks to study the effects of a number of treatments. Small field plots are available, located in several blocks, and each plot is assigned to one treatment in the experiment. Due to spatial correlation among the plots, the allocation of treatments to plots influences the analysis of the treatment effects. When the spatial correlation is known, optimal allocations (designs) of treatments to plots have been studied in the literature. However, the spatial correlation is usually unknown in practice, so we propose a robust criterion for studying optimal allocations of treatments to plots. Neighbourhoods of correlation structures are introduced and a modified generalized least squares estimator is discussed. A simulated annealing algorithm is implemented to compute optimal/robust designs. Various results are obtained for different experimental settings. Some theoretical results are also proved in the thesis. / Graduate
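To make the approach concrete, here is a minimal sketch of a simulated annealing search for a treatment allocation under an assumed spatial correlation. It uses an A-optimality criterion and an AR(1) correlation on a single line of plots, ignores block effects for brevity, and is not the thesis's algorithm or its robust criterion:

```python
import numpy as np

def a_criterion(design, r_inv, n_trt):
    # A-optimality: trace of the covariance of the GLS treatment estimates.
    x = np.eye(n_trt)[design]          # plots-by-treatments indicator matrix
    info = x.T @ r_inv @ x             # information matrix under correlation R
    return np.trace(np.linalg.inv(info))

def anneal(design, r_inv, n_trt, iters=5000, t0=1.0, seed=0):
    rng = np.random.default_rng(seed)
    cur = best = a_criterion(design, r_inv, n_trt)
    best_design = design.copy()
    for i in range(iters):
        cand = design.copy()
        a, b = rng.choice(design.size, size=2, replace=False)
        cand[a], cand[b] = cand[b], cand[a]     # swap two plots' treatments
        val = a_criterion(cand, r_inv, n_trt)
        temp = max(t0 * (1.0 - i / iters), 1e-9)
        if val < cur or rng.random() < np.exp(-(val - cur) / temp):
            design, cur = cand, val             # accept (possibly uphill) move
            if cur < best:
                best, best_design = cur, design.copy()
    return best_design, best

# 12 plots on a line, 3 treatments, assumed AR(1) correlation rho = 0.5
idx = np.arange(12)
r = 0.5 ** np.abs(idx[:, None] - idx[None, :])
start = np.tile(np.arange(3), 4)
print(anneal(start, np.linalg.inv(r), 3))
```

A robust version in the spirit of the thesis would evaluate the criterion over a neighbourhood of correlation structures and optimize the worst case.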
55

Computerized achievement tests : sequential and fixed length tests

Wiberg, Marie H. January 2003 (has links)
The aim of this dissertation is to describe how a computerized achievement test can be constructed and used in practice. Throughout this dissertation, the focus is on classifying the examinees into masters and non-masters depending on their ability; no attempt is made to estimate that ability. In paper I, a criterion-referenced computerized test with a fixed number of items is expressed as a statistical inference problem. The theory of optimal design is used to find the test with the strongest power. A formal proof is provided showing that all items should have the same item characteristics, viz. high discrimination, low guessing, and difficulty near the cutoff score, in order to give the most powerful statistical test. An efficiency study shows how many times more non-optimal items are needed to achieve the same power as a test built from optimal items. In paper II, a computerized mastery sequential test is examined using sequential analysis. The focus is on examining the sequential probability ratio test and on minimizing the number of items in a test, i.e. minimizing the average sample number (ASN) function. Conditions under which the ASN function decreases are examined. Further, it is shown that the optimal values of item discrimination and item guessing are the same as for fixed-length tests, but the optimal item difficulty differs. Paper III presents three simulation studies of sequential computerized mastery tests. Three cases are considered, viz. the examinees' responses are identically distributed, not identically distributed, or not identically distributed with estimation errors in the item characteristics. The simulations indicate that the observed results from the operating characteristic function differ significantly from the theoretical results. The mean number of items in a test, the distribution of test length, and the variance depend on whether the true values of the item characteristics are known and whether the responses are identically distributed or not. In paper IV, computerized tests containing both pretested items with known item parameters and try-out items with unknown item parameters are considered. The aim is to study how the item parameters of try-out items can be estimated in a computerized test. Although the examinees' unknown abilities act as nuisance parameters, the asymptotic variance of the item parameter estimators can be calculated. Examples show that a more reliable variance estimator yields much larger estimates of the variance than commonly used variance estimators.
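Paper II's classification setting can be illustrated with Wald's sequential probability ratio test on dichotomous item responses. In this sketch, `p1` and `p0` are the assumed probabilities of a correct answer for a master and a non-master respectively (illustrative parameters, not values from the dissertation):

```python
import math

def sprt_classify(responses, p1, p0, alpha=0.05, beta=0.05):
    # Wald SPRT: stop as soon as the accumulated log-likelihood ratio
    # crosses a boundary; alpha/beta are the target error rates.
    upper = math.log((1.0 - beta) / alpha)
    lower = math.log(beta / (1.0 - alpha))
    llr = 0.0
    for x in responses:  # x = 1 for a correct answer, 0 otherwise
        llr += math.log(p1 / p0) if x else math.log((1.0 - p1) / (1.0 - p0))
        if llr >= upper:
            return "master"
        if llr <= lower:
            return "non-master"
    return "undecided"   # item pool exhausted; fall back to a fixed rule

print(sprt_classify([1, 1, 0, 1, 1, 1, 1], p1=0.8, p0=0.5))
```

The ASN function studied in the paper is the expected number of responses consumed before one of the two boundaries is crossed.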
56

Practical Optimal Experimental Design in Drug Development and Drug Treatment using Nonlinear Mixed Effects Models

Nyberg, Joakim January 2011 (has links)
The cost of bringing a new drug to market has increased rapidly in the last decade. The reasons for this increase vary with the drug, but the need to make correct decisions earlier in the drug development process and to maximize the information gained throughout the process is evident. Optimal experimental design (OD) describes the procedure of maximizing relevant information in drug development and drug treatment processes. While various optimization criteria can be considered in OD, the most common aim is to maximize the precision of the unknown model parameters in an upcoming study. To date, OD has mainly been used to optimize the independent variables, e.g. sample times, but it can be used for any design variable in a study. This thesis addresses the OD of multiple continuous or discrete design variables for nonlinear mixed effects models. Methodology for optimizing different types of models with either continuous or discrete data is presented, and the benefits of OD for such models are shown. A software tool for optimizing these models in parallel was developed, and three OD examples are demonstrated: 1) optimization of an intravenous glucose tolerance test, reducing the number of samples by a third; 2) optimization of drug compound screening experiments, enabling the estimation of nonlinear kinetics; and 3) an individual dose-finding study for the treatment of children with ciclosporin before kidney transplantation, reducing the number of blood samples to ~27% of the original number and the study duration by 83%. This thesis uses examples and methodology to show that studies in drug development and drug treatment can be optimized using nonlinear mixed effects OD. This provides a tool that can lower the cost and increase the overall efficiency of drug development and drug treatment.
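The core OD computation can be sketched for a simple case. The code below scores candidate sample-time designs by the determinant of a fixed-effects Fisher information matrix for a one-compartment model with additive error; tools such as PopED extend this to the full nonlinear mixed effects FIM with random effects. All model and parameter values here are illustrative assumptions:

```python
import numpy as np

def conc(t, params, dose=100.0):
    # One-compartment IV bolus model: parameters are clearance and volume.
    cl, v = params
    return dose / v * np.exp(-(cl / v) * t)

def d_criterion(times, params, sigma=0.1, eps=1e-6):
    # det(FIM) for additive error; jac holds sensitivities dC/dtheta computed
    # by central finite differences. Larger is better (D-optimality).
    t = np.asarray(times, float)
    jac = np.empty((t.size, len(params)))
    for j in range(len(params)):
        hi = np.array(params, float); hi[j] += eps
        lo = np.array(params, float); lo[j] -= eps
        jac[:, j] = (conc(t, hi) - conc(t, lo)) / (2.0 * eps)
    return np.linalg.det(jac.T @ jac / sigma**2)

# Compare two candidate sampling schedules (hours)
print(d_criterion([1.0, 2.0, 4.0, 8.0], params=[2.0, 10.0]))
print(d_criterion([0.25, 0.5, 6.0, 12.0], params=[2.0, 10.0]))
```

In the mixed effects setting the information matrix also carries derivatives with respect to the random-effect and residual variance parameters, which is what makes dedicated software necessary.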
57

Optimal (Adaptive) Design and Estimation Performance in Pharmacometric Modelling

Maloney, Alan January 2012 (has links)
The pharmaceutical industry now recognises the importance of the newly defined discipline of pharmacometrics. Pharmacometrics uses mathematical models to describe and then predict the performance of new drugs in clinical development. To ensure these models are useful, the clinical studies must be designed such that the data generated allow the model predictions to be sufficiently accurate and precise. The capability of the available software to reliably estimate the model parameters must also be well understood. This thesis investigated two important areas in pharmacometrics: optimal design and software estimation performance. The three optimal design papers advanced significant areas of optimal design research, especially relevant to phase II dose response designs. The use of exposure, rather than dose, was investigated within an optimal design framework. In addition to using both optimal design and clinical trial simulation, this work employed a wide range of metrics for assessing design performance and illustrated how optimal designs for exposure response models may yield dose selections quite different from those based on standard dose response models. The investigation of optimal designs for Poisson dose response models demonstrated a novel mathematical approach to the necessary matrix calculations for non-linear mixed effects models. Finally, the enormous potential of optimal adaptive designs over fixed optimal designs was demonstrated. The results showed how the adaptive designs were robust to initial parameter misspecification, with the capability to "learn" the true dose response from the accruing subject data. The two estimation performance papers investigated the relative performance of a number of different algorithms and software programs for two complex pharmacometric models. In conclusion, these papers together cover a wide spectrum of study designs for non-linear dose/exposure response models: normal/non-normal data, fixed/mixed effects models, single/multiple design criteria metrics, optimal design/clinical trial simulation, and adaptive/fixed designs.
58

Modélisation et conception optimale d’un moteur linéaire à induction pour système de traction ferroviaire / Modeling and optimal design of a linear induction motor for railway system

Gong, Jinlin 21 October 2011 (has links)
This thesis studies the performance of a reference linear induction motor by finite element analysis and addresses optimal design with a computationally expensive model. The finite element method is used because an accurate analytical model of a linear motor is difficult to build owing to the end effects. First, a 2D finite element model (FEM) is constructed, which takes into account the longitudinal end effects; the transverse edge effects are incorporated into the 2D model by varying the conductivity of the secondary and by adding the inductance of the winding overhang. Second, a coupled magnetic-thermal 3D FEM is built, which accounts for all the end effects as well as the influence of temperature. Finally, a test bench is built to validate the finite element models; the comparison between the different models shows the importance of the coupled model. Optimization directly on a FEM is very costly in computation time, so strategies that substitute a surrogate model for the FEM are studied. Direct optimization on a surrogate model and the Efficient Global Optimization (EGO) algorithm are compared, and a three-level output space-mapping technique is proposed. Test cases show that the three-level algorithm saves substantial computation time compared with the classical two-level output space mapping. Finally, a multi-objective optimization strategy with progressive improvement of a surrogate model is proposed and tested with a small budget of 3D FEM evaluations. The proposed strategy evaluates the FEM in parallel, yielding a considerable reduction in optimization time, and produces a 3D Pareto front composed of finite element evaluation results to support the engineering design decision.
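As background, the classical two-level output space mapping that the thesis builds on can be sketched as follows: each iteration evaluates the expensive "fine" model once, aligns the cheap "coarse" model's output to it, and optimizes the corrected coarse model. The `fine` and `coarse` functions below are toy stand-ins for a 3D FEM and a 2D model, not the thesis's motor models:

```python
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0.0, 1.0, 20)
target = 2.0 * np.exp(-1.3 * t)                 # desired response curve

def fine(x):    # expensive model stand-in (imagine a coupled 3D FEM)
    return x[0] * np.exp(-1.15 * x[1] * t)      # "extra physics": faster decay

def coarse(x):  # cheap model stand-in (imagine a 2D FEM)
    return x[0] * np.exp(-x[1] * t)

def output_space_mapping(x0, iters=4):
    x = np.asarray(x0, float)
    for _ in range(iters):
        delta = fine(x) - coarse(x)             # one fine evaluation per loop
        res = least_squares(lambda z: coarse(z) + delta - target, x)
        x = res.x                               # optimum of corrected coarse model
    return x

print(output_space_mapping([1.0, 1.0]))
```

The appeal is that the expensive model is evaluated only once per iteration, while all the optimization work happens on the cheap model; the thesis's three-level variant inserts an intermediate model between the two.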
59

Destilação extrativa de etanol utilizando glicerol - modelagem termodinâmica, otimização e determinação de uma configuração ótima / Extractive distillation of ethanol using glycerol - thermodynamic modeling, optimization and determination of an optimal configuration

Mezzomo, Henrique January 2014 (has links)
Ethanol is one of the most important renewable fuels and contributes to reducing the negative impacts caused by the use of fossil fuels worldwide. It is obtained mainly by the fermentation of sugars from sugar cane and corn. The fermentation broth contains approximately 96.5 mol% water, and one challenge is to economically obtain a product above 99 mol% ethanol for use in the transportation sector. The present work aims at optimizing the extractive distillation of ethanol using glycerol as the extracting agent. This solvent is a byproduct of renewable diesel production and was studied as an alternative to ethylene glycol, the currently used solvent derived from non-renewable sources. Twenty-two different configurations of simple and complex column sequences were evaluated in this investigation. The recent F-SAC activity coefficient model was fitted for the best representation of vapor-liquid equilibrium and infinite-dilution activity coefficient data collected from the literature. The predictions of the F-SAC model were superior to those of other activity coefficient models; the mean absolute deviation was roughly 47% smaller than with the NRTL model. The process model was built in an equation-based simulator, in which the mass and energy balances are solved simultaneously, in order to identify changes that reduce energy consumption and increase production. The influence of the main process parameters was evaluated via simulations, and we found that an optimal configuration and operation of the extractive distillation system with glycerol can significantly reduce the energy consumption of the process. The energy savings can reach up to 10% compared with the best configuration available in the literature using ethylene glycol as the entrainer.
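For context, the kind of thermodynamic calculation being calibrated can be shown with the NRTL model named in the comparison (F-SAC itself is a COSMO-based group-contribution model and is considerably more involved). The binary interaction parameters below are illustrative placeholders, not fitted ethanol/water or ethanol/glycerol values:

```python
import math

def nrtl_binary(x1, tau12, tau21, alpha=0.3):
    # NRTL activity coefficients for a binary mixture; tau12/tau21 are the
    # dimensionless interaction parameters, alpha the non-randomness factor.
    x2 = 1.0 - x1
    g12 = math.exp(-alpha * tau12)
    g21 = math.exp(-alpha * tau21)
    ln_g1 = x2**2 * (tau21 * (g21 / (x1 + x2 * g21))**2
                     + tau12 * g12 / (x2 + x1 * g12)**2)
    ln_g2 = x1**2 * (tau12 * (g12 / (x2 + x1 * g12))**2
                     + tau21 * g21 / (x1 + x2 * g21)**2)
    return math.exp(ln_g1), math.exp(ln_g2)

# Activity coefficients across compositions (illustrative parameters)
for x1 in (0.1, 0.5, 0.9):
    print(x1, nrtl_binary(x1, tau12=0.8, tau21=1.2))
```

Infinite-dilution activity coefficients, one of the data types used in the fit, follow by letting x1 tend to zero.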
60

Assessing Nonlinear Relationships through Rich Stimulus Sampling in Repeated-Measures Designs

Cole, James Jacob 01 August 2018 (has links)
Explaining a phenomenon often requires identifying an underlying relationship between two variables. However, it is common practice in psychological research to sample only a few values of an independent variable. Young, Cole, and Sutherland (2012) showed that this practice can impair model selection in between-subjects designs. The current study extends that line of research to within-subjects designs. In two Monte Carlo simulations, model discrimination under systematic sampling of 2, 3, or 4 levels of the IV was compared with that under random uniform sampling and sampling from a Halton sequence. The number of subjects, the number of observations per subject, the effect size, and the between-subject parameter variance in the simulated experiments were also manipulated. Random sampling outperformed the other methods in model discrimination, with only small, function-specific costs to parameter estimation. Halton sampling also produced good results but was less consistent. The systematic sampling methods were generally rank-ordered by the number of levels they sampled.
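The Halton sequence used as one of the sampling schemes is easy to generate; in one dimension it reduces to the van der Corput radical-inverse sequence. A minimal sketch, with an assumed IV range of 0-100 for illustration:

```python
def halton(n, base=2):
    # 1-D Halton (van der Corput) sequence: the radical inverse of
    # 1..n in the given base, giving low-discrepancy points in (0, 1).
    points = []
    for i in range(1, n + 1):
        k, frac, result = i, 1.0, 0.0
        while k > 0:
            frac /= base
            result += frac * (k % base)
            k //= base
        points.append(result)
    return points

# Eight low-discrepancy stimulus values on an IV range of 0-100
print([round(100 * u, 1) for u in halton(8)])
```

Unlike random uniform sampling, successive Halton points fill the range evenly from the start, which is why the sequence is attractive for rich stimulus sampling with few observations.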
