21

Redesign of Alpha Class Glutathione Transferases to Study Their Catalytic Properties

Nilsson, Lisa O January 2001 (has links)
A number of active-site mutants of human Alpha class glutathione transferase A1-1 (hGST A1-1) were made and characterized to determine the structural determinants of alkenal activity. The choice of mutations was based on primary-structure alignments of hGST A1-1 and the Alpha class enzyme with the highest alkenal activity, hGST A4-4, from three different species, and on crystal-structure comparisons between the human enzymes. The result was an enzyme with a 3000-fold change in substrate specificity for nonenal over 1-chloro-2,4-dinitrobenzene (CDNB). The C-terminus of the Alpha class enzymes is an α-helix that folds over the active site upon substrate binding. The rate-determining step is product release, which is influenced by the movement of the C-terminus as it opens the active site. Phenylalanine 220, near the end of the C-terminus, forms an aromatic cluster with tyrosine 9 and phenylalanine 10, positioning the β-carbon of the cysteinyl moiety of glutathione. The effects of phenylalanine 220 mutations on the mobility of the C-terminus were studied through the viscosity dependence of kcat and kcat/Km, with glutathione and CDNB as the varied substrates. The compatibility of slightly different subunit interfaces within the Alpha class was studied by heterodimerization between monomers from hGST A1-1 and hGST A4-4. The heterodimer was temperature-sensitive and rehybridized into homodimers at 40 °C. The heterodimers did not show strictly additive activities with alkenals and CDNB. This result, combined with further studies, indicates that factors at the subunit interface influence the catalytic properties of hGST A1-1.
22

Sequential experimental design under competing prior knowledge

Vastola, Justin Timothy 11 December 2012 (has links)
This research develops a comprehensive framework for designing and modeling experiments in the presence of multiple sources of competing prior knowledge. In particular, methodology is proposed for process optimization in high-cost, low-resource experimental settings where the underlying response function can be highly non-linear. In the first part of this research, an initial experimental design criterion is proposed for optimization problems that combines multiple, potentially competing, sources of prior information: engineering models, expert opinion, and data from past experimentation on similar, non-identical systems. New methodology is provided for incorporating and combining conjectured models and data into both the initial modeling and design stages. The second part of this research focuses on the development of a batch sequential design procedure for optimizing high-cost, low-resource experiments with complicated response surfaces. The success of the proposed approach lies in melding a flexible, sequential design algorithm with a powerful local modeling approach. Batch experiments are designed sequentially, adapting to balance space-filling properties against the search for the optimal operating condition. Local model calibration and averaging techniques are introduced to allow easy incorporation of statistical models and engineering knowledge, even if such knowledge pertains to only subregions of the complete design space. The overall process iterates between adapting designs, adapting models, and updating engineering knowledge over time. Applications to nanomanufacturing are provided throughout.
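A minimal sketch of what a batch criterion balancing space-filling against optimum search could look like, assuming a scikit-learn Gaussian-process surrogate and minimization of the response; the scoring rule and the weight `w` are illustrative stand-ins, not the dissertation's actual criterion.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def next_batch(X_obs, y_obs, candidates, batch_size=4, w=0.5):
    """Greedily pick a batch balancing space-filling vs. optimum search.

    w=1 is pure space-filling (maximin distance to the design so far);
    w=0 chases the current best prediction (minimization assumed).
    Hypothetical criterion for illustration only.
    """
    gp = GaussianProcessRegressor(normalize_y=True).fit(X_obs, y_obs)
    mu = gp.predict(candidates)                 # surrogate predictions
    design = [np.asarray(x) for x in X_obs]
    batch = []
    for _ in range(batch_size):
        # distance of each candidate to its nearest design point (space-filling term)
        d = np.min(np.linalg.norm(
            candidates[:, None, :] - np.stack(design)[None, :, :], axis=2), axis=1)
        # merit: far from existing points AND predicted to be good (low mean)
        score = (w * d / (d.max() + 1e-12)
                 + (1 - w) * (mu.max() - mu) / (np.ptp(mu) + 1e-12))
        best = int(np.argmax(score))
        batch.append(candidates[best])
        design.append(candidates[best])  # update distances so the batch spreads out
    return np.stack(batch)
```

Appending each chosen point to the working design before scoring the next one is what makes the batch itself spread out, rather than piling all its points on the same promising spot.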
23

A Systematic Process for Adaptive Concept Exploration

Nixon, Janel Nicole 29 November 2006 (has links)
This thesis presents a method for streamlining the process of obtaining and interpreting quantitative data for the purpose of creating a low-fidelity modeling and simulation environment. By providing a more efficient means of obtaining such information, quantitative analyses become much more practical for decision-making in the very early stages of design, where traditionally such analyses are viewed as too expensive and cumbersome for concept evaluation. The method developed to address this need is a Systematic Process for Adaptive Concept Exploration (SPACE). In the SPACE method, design space exploration occurs in a sequential fashion: as data is acquired, the sampling scheme adapts to the specific problem at hand. Previously gathered data is used to make inferences about the nature of the problem so that future samples can be taken from the more interesting portions of the design space. Furthermore, the SPACE method identifies those analyses that have significant impacts on the relationships being modeled, so that effort can be focused on acquiring only the most pertinent information. The results show that a tailored data set and an informed model structure work together to provide a meaningful quantitative representation of the system while relying on only a small amount of resources. In comparison to more traditional modeling and simulation approaches, the SPACE method provides a more accurate representation of the system using fewer resources. For this reason, the SPACE method acts as an enabler for decision-making in the very early design stages, where the desire is to base design decisions on quantitative information without wasting valuable resources obtaining unnecessarily high-fidelity information about all the candidate solutions. Thus, the approach enables concept selection to be based on parametric, quantitative data so that informed, unbiased decisions can be made.
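The sequential adapt-as-you-go idea can be sketched in a few lines. This is a generic illustration, assuming "interesting" is proxied by the surrogate's predictive uncertainty, which is one common choice rather than the specific SPACE criterion:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def adaptive_sample(f, X_init, candidates, n_new=20):
    """Sequentially add samples where the surrogate is most uncertain.

    f          -- the (expensive) analysis being modeled
    X_init     -- (n, dim) starting design
    candidates -- (m, dim) pool of possible sample locations
    """
    X = [np.asarray(x) for x in X_init]
    y = [f(x) for x in X]
    for _ in range(n_new):
        gp = GaussianProcessRegressor(normalize_y=True).fit(np.stack(X), np.array(y))
        _, sd = gp.predict(candidates, return_std=True)
        x_next = candidates[int(np.argmax(sd))]   # most "interesting" region so far
        X.append(x_next)
        y.append(f(x_next))
    return np.stack(X), np.array(y)
```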
24

Asymptotic theory for decentralized sequential hypothesis testing problems and sequential minimum energy design algorithm

Wang, Yan 19 May 2011 (has links)
The dissertation investigates the asymptotic theory of decentralized sequential hypothesis testing problems as well as the asymptotic behavior of the Sequential Minimum Energy Design (SMED). The main results are summarized as follows.

1. We develop the first-order asymptotic optimality theory for decentralized sequential multi-hypothesis testing under a Bayes framework. Asymptotically optimal tests are obtained from the class of "two-stage" procedures, and the optimal local quantizers are shown to be the "maximin" quantizers, characterized as a randomization of at most M-1 Unambiguous Likelihood Quantizers (ULQ) when testing M ≥ 2 hypotheses.
2. We generalize the classical Kullback-Leibler inequality to investigate the effects of quantization on the second-order and other general-order moments of log-likelihood ratios. It is shown that quantization may increase these quantities, but such an increase is bounded by a universal constant that depends on the order of the moment. This result provides a simpler sufficient condition for the asymptotic theory of decentralized sequential detection.
3. We propose a class of multi-stage tests for decentralized sequential multi-hypothesis testing problems, and show that with suitably chosen thresholds at the different stages, they retain second-order asymptotic optimality properties when the hypothesis testing problem is "asymmetric."
4. We characterize the asymptotic behavior of the SMED algorithm, particularly the denseness and distributions of the design points. In addition, we propose a simplified version of SMED that is computationally more efficient.
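For context on point 2, the classical inequality can be written out; the second display is a hedged paraphrase of the abstract's claim in my own notation (the constant C_r and the symbols are not the dissertation's), under the assumption that Λ denotes the likelihood ratio and Λ_Q its quantized counterpart.

```latex
% Classical direction (data processing): quantizing the observation by a
% local quantizer Q cannot increase the KL divergence between hypotheses,
\[
  D\!\left(P^{Q} \,\middle\|\, \bar{P}^{Q}\right)
  \;\le\; D\!\left(P \,\middle\|\, \bar{P}\right).
\]
% Per the abstract, higher-order moments of the log-likelihood ratio need
% not decrease under quantization, but the increase is universally bounded:
\[
  \mathbb{E}\!\left[\bigl(\log \Lambda_{Q}\bigr)^{r}\right]
  \;\le\; \mathbb{E}\!\left[\bigl(\log \Lambda\bigr)^{r}\right] + C_{r},
\]
% where C_r depends only on the order r of the moment.
```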
25

Sequential design of numerical experiments in multi-fidelity: Application to a fire simulator

Stroh, Rémi 26 June 2018 (has links)
This work focuses on the study of multi-fidelity numerical models, deterministic or stochastic. More precisely, the models considered have a parameter which governs the quality of the simulation, such as a mesh size in a finite-difference model or a number of samples in a Monte-Carlo model. In that case, the simulator can run low-fidelity simulations, fast but coarse, or high-fidelity simulations, accurate but expensive. A multi-fidelity approach aims to combine results coming from the different levels of fidelity in order to save computational time. The method considered is based on a Bayesian approach. The simulator is described by a state-of-the-art multilevel Gaussian process model from the literature, which we adapt to stochastic cases in a fully Bayesian approach. This meta-model of the simulator allows estimation of quantities of interest together with a measure of the associated uncertainty. The goal is then to choose new experiments to run in order to improve the estimates. In particular, the design must select the level of fidelity offering the best trade-off between the cost of observation and the information gain. To this end, we propose a sequential strategy suited to the case of variable observation costs, called Maximum Rate of Uncertainty Reduction (MRUR), which consists of choosing the observation point maximizing the ratio between the uncertainty reduction and the cost. The methodology is illustrated in fire safety, where we estimate probabilities of failure of a smoke extraction system.
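The MRUR selection rule as described above reduces to an argmax over candidate (point, fidelity level) pairs. A minimal sketch, where the uncertainty-reduction function is a stand-in for whatever metamodel-based estimate is available, and all names are illustrative:

```python
import numpy as np

def mrur_next(candidates, levels, uncertainty_reduction, cost):
    """Pick the (point, fidelity level) maximizing uncertainty reduction per unit cost.

    uncertainty_reduction(x, l) -- expected drop in the uncertainty measure if
                                   point x is observed at fidelity level l
                                   (metamodel-based stand-in);
    cost(l)                     -- cost of one observation at level l.
    """
    best, best_rate = None, -np.inf
    for x in candidates:
        for l in levels:
            rate = uncertainty_reduction(x, l) / cost(l)
            if rate > best_rate:
                best, best_rate = (x, l), rate
    return best

# Usage sketch: two fidelity levels, cheap level 0 vs. expensive level 1.
# x_star, level = mrur_next(grid, [0, 1], my_reduction, lambda l: (1.0, 10.0)[l])
```

The point of dividing by the cost is that a coarse, cheap run is preferred whenever it buys nearly as much information as an expensive one.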
26

Fixed-size and sequential designs for kriging

Abtini, Mona 30 August 2018 (has links)
Numerical simulation has become an alternative to physical experimentation for studying complex phenomena. Such problems usually rely on large, sophisticated simulation codes that are very expensive in computing time, so exploiting these codes becomes a problem when the objective requires a significant number of evaluations. In practice, the code is replaced by a global approximation model, often called a metamodel, most commonly a Gaussian process (kriging) model adjusted to a design of experiments, i.e. to observations of the model output obtained on a small number of simulations. Space-filling designs, which spread the design points evenly over the entire feasible input region, are the most widely used designs. This thesis consists of two parts, both concerned with the construction of designs of experiments adapted to kriging, one of the most popular metamodels.

Part I considers the construction of fixed-size space-filling designs adapted to kriging prediction. We first study the effect of the Latin Hypercube constraint (the type of design most used in practice with kriging) on maximin-optimal designs. This study shows that when the design has a small number of points, adding the Latin Hypercube constraint is beneficial because it mitigates the drawbacks of maximin-optimal configurations (the placement of the majority of points at the boundary of the input space). Following this study, a uniformity criterion called radial discrepancy is proposed in order to measure the uniformity of the design points according to their distance to the boundary of the input space. We then show that the minimax-optimal design is the design closest to the IMSE design (a design adapted to prediction by kriging) but is also very costly to compute, and we introduce a proxy for the minimax-optimal design based on the maximin-optimal design. Finally, we present a carefully tuned implementation of the simulated annealing algorithm for finding maximin-optimal designs; the aim here is to minimize the probability of falling into a local optimum.

The second part of the thesis concerns a slightly different problem. If X_N is a space-filling design of N points, there is no guarantee that a sub-design of n points (n ≤ N) is space-filling; in practice, however, the simulations may have to be stopped before the full design is realized. The aim of this part is therefore to propose a new methodology for constructing sequences of nested space-filling designs X_n, for any n between 1 and N, that are all adapted to kriging prediction. We introduce a method to generate nested designs based on information criteria, in particular the Mutual Information criterion, which measures the reduction of prediction uncertainty over the whole domain between before and after the response is observed at the design points. This approach ensures good quality for all the designs generated, 1 ≤ n ≤ N. A key difficulty is the cost of evaluating the criterion, notably when generating designs in high dimension. To address this issue, a particular implementation has been proposed that computes the determinant of a covariance matrix by partitioning it into blocks; this implementation makes the method numerically efficient and significantly reduces the computational cost of MI-sequential designs.
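The block trick mentioned at the end can be made concrete: for a partitioned matrix with invertible upper-left block, the determinant factors through the Schur complement, so when the design grows one point at a time only the small complement needs fresh work. A small illustrative sketch (not the thesis code):

```python
import numpy as np

def block_det(A, B, C, D):
    """det([[A, B], [C, D]]) = det(A) * det(D - C A^{-1} B), A invertible.

    When det(A) is carried over from the previous design size, only the
    (small) Schur complement needs a fresh determinant.
    """
    schur = D - C @ np.linalg.solve(A, B)
    sign_a, logdet_a = np.linalg.slogdet(A)
    sign_s, logdet_s = np.linalg.slogdet(schur)
    return sign_a * sign_s * np.exp(logdet_a + logdet_s)

# Check against the direct computation on a random SPD covariance matrix:
rng = np.random.default_rng(0)
M = rng.normal(size=(6, 6)); M = M @ M.T + 6 * np.eye(6)
A, B, C, D = M[:4, :4], M[:4, 4:], M[4:, :4], M[4:, 4:]
assert np.isclose(block_det(A, B, C, D), np.linalg.det(M))
```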
27

Sequential Designs with Measurement Errors in Logistic Models with Applications to Educational Testing

盧宏益, Lu, Hung-Yi Unknown Date (has links)
This dissertation focuses on estimation in logistic regression models when the independent variables are subject to measurement errors, with the results applied to the online calibration problem in Computerized Adaptive Testing (CAT). We apply measurement error model techniques and adaptive sequential design methodology to this problem, and prove that the estimates of the item parameters are strongly consistent under a variable-length CAT setup. In an adaptive testing scheme, examinees are presented with different sets of items chosen from a pre-calibrated item pool to best match each examinee's ability, so items are consumed much faster than in traditional testing, and replenishing the pool with newly calibrated items is essential for maintaining and managing it. Online calibration refers to estimating the item parameters of new, uncalibrated items by presenting them to examinees during the course of their ability testing, together with previously calibrated items. The estimated latent trait levels of the examinees are then used as the design points for estimating the parameters of the new items, and these design points are naturally subject to estimation errors. From a statistical viewpoint, the difficulty of online calibration can thus be formulated as a sequential estimation problem for a nonlinear model with measurement errors in the independent variables, which are themselves chosen sequentially. Item Response Theory (IRT) is the most commonly used psychometric model in CAT, and logistic-type models are the most popular models in IRT-based tests, which is why a nonlinear design problem and nonlinear measurement error models are involved. The sequential design procedures proposed here reduce the measurement error to provide more accurate parameter estimates, and are more efficient in terms of sample size (the number of examinees used in calibration). In the traditional calibration process for paper-and-pencil tests, one usually has to pay examinees to join a pre-test calibration session; in online calibration there is less cost, since new items can be assigned to examinees during the operational test. The proposed procedures are therefore cost-effective as well as time-effective.
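A small sketch of the setup, assuming the standard two-parameter logistic IRT model (the abstract names logistic-type IRT models without fixing the exact form); the error scale and all names are illustrative. It shows why treating the estimated ability as exact is problematic: the measurement error attenuates the estimated discrimination.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def icc(theta, a, b):
    """Two-parameter logistic item characteristic curve:
    P(correct | theta) = 1 / (1 + exp(-a * (theta - b)))."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

rng = np.random.default_rng(1)
a_true, b_true = 1.2, 0.3                 # discrimination / difficulty of a new item
theta = rng.normal(size=2000)             # true examinee abilities
theta_hat = theta + rng.normal(scale=0.4, size=theta.size)  # noisy CAT estimates
responses = rng.random(theta.size) < icc(theta, a_true, b_true)

# Naive calibration treats theta_hat as exact; the measurement error biases
# the slope toward zero (attenuation) -- the problem the proposed sequential,
# error-corrected designs address.
naive = LogisticRegression(C=1e6).fit(theta_hat.reshape(-1, 1), responses)
print(naive.coef_[0][0], "vs true discrimination", a_true)
```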
28

Development and validation of an integrated model for evaluating e-service quality, usability and user experience (e-SQUUX) of Web-based applications in the context of a University web portal

Ssemugabi, Samuel 01 1900 (has links)
Text in English / Developments in Internet technology and pervasive computing over the past two and a half decades have resulted in a variety of Web-based applications (WBAs) that provide products and services to online users or customers. The Internet is used not only to transfer information via the web but increasingly to provide electronic services, including business transactions, information delivery and social networking, as well as e-government, e-health and e-learning. For the organisations involved, e-service quality, usability and user experience are considered critical determinants of the success of their products or services. Many studies have modelled these three concepts separately, as part of broader work on software quality or service quality modelling, but to the current researcher's knowledge none has proposed an evaluation model that integrates and combines all three. This research is an effort to fill that gap. The primary purpose of this mixed-methods research was to develop a conceptual integrated model for evaluating the e-service quality, usability and user experience (e-SQUUX) of WBAs and then to contextualise it to the evaluation of a University web portal (UWP). This was undertaken using an exploratory sequential research design. During the qualitative phase, an extensive systematic literature review of 264 relevant sources on the dimensions of e-service quality, usability and user experience was undertaken to derive an integrated conceptual e-SQUUX Model for evaluating WBAs. The model was then empirically refined through a sequential series of validations, producing successive versions of the e-SQUUX Model. First, it was content-validated by a set of four expert reviewers. Second, during the quantitative phase, in the context of a University web portal, a questionnaire survey was conducted, including a comprehensive pilot study with 29 participants prior to the main survey. The main survey data from 174 participants was used to determine a validated model using exploratory factor analysis (EFA), followed by a structural model produced using partial least squares structural equation modelling (PLS-SEM). This version constitutes the final e-SQUUX Model. Consequently, the research enriches the body of knowledge in IS and HCI by providing the e-SQUUX Model as an evaluation tool. For designers, developers and managers of UWPs, the model serves as a customisable set of evaluation criteria and also provides specific recommendations for design. In line with the exploratory sequential design of mixed-methods research, the findings of the qualitative work influenced the subsequent quantitative study, since the potential Likert-scale questionnaire items were derived from the definitions and meanings of the components that emerged from the qualitative phase. Consequently, this research is an exemplar of developing an integrated evaluation model for specific facets or domains, and of applying it in a particular context, in this case a University web portal.
Keywords: e-service quality, usability, user experience, evaluation model, integrated model, exploratory factor analysis, partial least squares structural equation modelling (PLS-SEM), mixed methods research, exploratory sequential design, quantitative study, qualitative study, validation, Web-based applications, University web portal / Information Systems / Ph.D. (Information Systems)
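A minimal sketch of the EFA step described above, assuming Likert-scale survey responses arranged as a respondents-by-items matrix; scikit-learn's FactorAnalysis stands in for whatever EFA package was actually used (unspecified in the abstract), and the data and component count are placeholders.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# responses: n_respondents x n_items matrix of Likert scores (e.g. 174 x 50);
# placeholder random data here -- real survey data would go in its place.
rng = np.random.default_rng(42)
responses = rng.integers(1, 6, size=(174, 50)).astype(float)

fa = FactorAnalysis(n_components=6)      # hypothesised number of model components
scores = fa.fit_transform(responses)     # factor scores per respondent
loadings = fa.components_.T              # item-by-factor loading matrix
# Items loading strongly on the same factor are grouped into one component
# of the evaluation model; the retained factors then feed the PLS-SEM stage.
```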
