1. Refinement of reduced protein models with all-atom force fields. Wróblewska, Liliana. 14 November 2007.
The goal of this thesis research was to develop a systematic approach for the refinement of low-resolution protein models as part of the protein structure prediction procedure. Significant progress has been made in the field of protein structure prediction, and contemporary methods are able to assemble the correct topology for a large fraction of protein domains. However, such approximate models are often not detailed enough for important applications, including studies of reaction mechanisms, functional annotation, drug design, and virtual ligand screening. The development of a method that could bring those structures closer to the native structure is therefore of great importance.
The minimal requirements for a potential that can refine protein structures are a correlation between energy and native similarity and the scoring of the native structure as lowest in energy. Extensive tests of contemporary all-atom physics-based force fields were conducted to assess their applicability for refinement. The tests revealed the flatness of such potentials and enabled the identification of the key problems in current approaches. Guided by these results, the AMBER (ff03) force field was optimized with the aim of creating a funnel-shaped potential with the native structure at the global minimum. Such a shape should facilitate the conformational search during refinement and drive it toward the native conformation. Adjusting the relative weights of particular energy components and adding an explicit hydrogen-bond potential significantly improved the average correlation coefficient between energy and native similarity (from 0.25 for the original ff03 potential to 0.65 for the optimized force field). The fraction of proteins for which the native structure had the lowest energy increased from 0.22 to 0.90. The new, optimized potential was subsequently used to refine protein models of varying native similarity. The tests employed 47 proteins and 100 decoy structures per protein. When the lowest-energy structure from each trajectory was compared with the starting decoy, structural improvement was observed for 70% of the models on average. Such an unprecedented result of systematic refinement is extremely promising in the context of high-resolution structure prediction.
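As a rough illustration of the two requirements stated above, the following sketch evaluates a weighted combination of energy components, checks its correlation with native similarity across decoys, and tests whether the native structure scores lowest. The energy terms, weights, and decoy records are hypothetical placeholders, not the actual optimized ff03 parameters from the thesis.

# A minimal sketch, assuming hypothetical energy terms, weights, and decoy data.
from dataclasses import dataclass
from statistics import mean
import math

@dataclass
class Decoy:
    rmsd_to_native: float   # native similarity: 0.0 for the native structure itself
    terms: dict             # per-component energies, e.g. {"vdw": ..., "elec": ..., "hbond": ...}

def weighted_energy(terms, weights):
    # Combine individual energy components with adjustable relative weights,
    # including an explicit hydrogen-bond term if present.
    return sum(weights[name] * value for name, value in terms.items())

def pearson(xs, ys):
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def evaluate_potential(native, decoys, weights):
    energies = [weighted_energy(d.terms, weights) for d in decoys]
    rmsds = [d.rmsd_to_native for d in decoys]
    # Criterion 1: a funnel-shaped potential gives a strong correlation between
    # energy and distance from the native structure across the decoy set.
    correlation = pearson(rmsds, energies)
    # Criterion 2: the native structure should sit at the global energy minimum.
    native_is_lowest = weighted_energy(native.terms, weights) < min(energies)
    return correlation, native_is_lowest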
2. Policy Explanation and Model Refinement in Decision-Theoretic Planning. Khan, Omar Zia. January 2013.
Decision-theoretic systems, such as Markov Decision Processes (MDPs), are used for sequential decision-making under uncertainty. MDPs provide a generic framework that can be applied in various domains to compute optimal policies. This thesis presents techniques that offer explanations of optimal policies for MDPs and then refine decision-theoretic models (Bayesian networks and MDPs) based on feedback from experts.
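As background for how such optimal policies arise, the following minimal sketch computes an MDP policy by value iteration. The transition and reward tables it expects are illustrative placeholders, not a domain from the thesis.

# A minimal value-iteration sketch, assuming tabular inputs:
# P[s][a] is a list of (next_state, probability); R[s][a] is the immediate reward.
def value_iteration(states, actions, P, R, gamma=0.95, tol=1e-6):
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(R[s][a] + gamma * sum(p * V[s2] for s2, p in P[s][a])
                       for a in actions)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            break
    # The optimal policy picks, in each state, the action with the best backed-up value.
    policy = {s: max(actions,
                     key=lambda a: R[s][a] + gamma * sum(p * V[s2] for s2, p in P[s][a]))
              for s in states}
    return policy, V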
Explaining policies for sequential decision-making problems is difficult because of stochastic effects, multiple and possibly competing objectives, and the long-range effects of actions. However, explanations are needed to assist experts in validating that the policy is correct and to help users develop trust in the choices recommended by the policy. A set of domain-independent templates for justifying a policy recommendation is presented, along with a process for identifying the minimum number of templates that need to be populated to completely justify the policy.
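By way of illustration only, a populated explanation template might look like the sketch below; the wording and the likelihood statistic are assumptions made for this example, not the exact templates developed in the thesis.

# An illustrative, domain-independent explanation template (assumed wording).
TEMPLATE = ("Action '{action}' is recommended in state '{state}' because it is "
            "{factor:.1f} times more likely than any alternative to reach states "
            "where '{goal_var}' takes its most desirable value.")

def populate(action, state, factor, goal_var):
    return TEMPLATE.format(action=action, state=state, factor=factor, goal_var=goal_var)

# Hypothetical instantiation from a hand-washing-like domain:
print(populate("prompt_user", "hands_soapy", 2.3, "hands_clean"))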
The rejection of an explanation by a domain expert indicates a deficiency in the model that led to the generation of the rejected policy. This thesis presents techniques to refine the model parameters so that the optimal policy computed from the refined parameters conforms with the expert feedback. The expert feedback is translated into constraints on the model parameters, which are used during refinement. These constraints are non-convex for both Bayesian networks and MDPs. For Bayesian networks, the refinement approach is based on Gibbs sampling and stochastic hill climbing, and it learns a model that obeys the expert constraints. For MDPs, the parameter space is partitioned so that alternating linear optimization can be applied to learn model parameters that lead to a policy in accordance with the expert feedback.
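The sketch below gives a simplified flavour of constraint-guided stochastic hill climbing over a single Bayesian-network conditional probability row. The constraint form (lower bounds on selected probabilities) and the perturbation scheme are simplifying assumptions for illustration, not the thesis algorithm itself.

# A minimal sketch, assuming expert constraints of the form (index, lower bound).
import random

def normalize(p):
    total = sum(p)
    return [x / total for x in p]

def violation(p, constraints):
    # Total amount by which constrained entries fall below their expert lower bounds.
    return sum(max(0.0, bound - p[i]) for i, bound in constraints)

def hill_climb(p0, constraints, steps=5000, noise=0.05, seed=0):
    rng = random.Random(seed)
    p, best = list(p0), violation(p0, constraints)
    for _ in range(steps):
        if best == 0.0:          # all expert constraints satisfied
            break
        candidate = normalize([max(1e-6, x + rng.gauss(0, noise)) for x in p])
        score = violation(candidate, constraints)
        if score <= best:        # accept moves that do not worsen the violation
            p, best = candidate, score
    return p

# Example: the expert insists the second outcome has probability at least 0.4.
refined = hill_climb([0.7, 0.2, 0.1], constraints=[(1, 0.4)])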
In practice, the state space of an MDP can be very large, which is an issue for real-world problems. Factored MDPs are often used to address this issue: state variables represent the state space, and dynamic Bayesian networks model the transition functions, avoiding the exponential growth in the state space associated with large and complex problems. The approaches for explanation and refinement presented in this thesis are also extended to the factored case to demonstrate their use in real-world applications. Empirical evaluations are presented in the domains of course advising for undergraduate students, assisted hand-washing for people with dementia, and diagnostics for manufacturing.
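A minimal sketch of the factored representation follows, using two made-up boolean state variables loosely inspired by the hand-washing domain; the variables, parent sets, and probabilities are assumptions for illustration only.

# Each state variable's next value depends only on a small set of parents
# (here: the action and the variable's own current value).
transition_factors = {
    "hands_wet": {               # parents: (action, hands_wet)
        ("turn_on_tap", False): {True: 0.9, False: 0.1},
        ("turn_on_tap", True):  {True: 1.0, False: 0.0},
    },
    "soap_on": {                 # parents: (action, soap_on)
        ("use_soap", False):    {True: 0.8, False: 0.2},
        ("use_soap", True):     {True: 1.0, False: 0.0},
    },
}

def factored_transition_prob(action, state, next_state):
    # The joint transition probability factorizes over the per-variable tables;
    # variables without an entry for this action simply persist.
    prob = 1.0
    for var, table in transition_factors.items():
        dist = table.get((action, state[var]), {state[var]: 1.0})
        prob *= dist.get(next_state[var], 0.0)
    return prob

# Example: probability the hands become wet after turning on the tap.
# factored_transition_prob("turn_on_tap",
#                          {"hands_wet": False, "soap_on": False},
#                          {"hands_wet": True,  "soap_on": False})  # -> 0.9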