  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
361

Neural-Symbolic Integration / Neuro-Symbolische Integration

Bader, Sebastian 15 December 2009 (has links) (PDF)
In this thesis, we discuss different techniques to bridge the gap between two different approaches to artificial intelligence: the symbolic and the connectionist paradigm. Both approaches have quite contrasting advantages and disadvantages. Research in the area of neural-symbolic integration aims at bridging the gap between them. Starting from a human-readable logic program, we construct connectionist systems which behave equivalently. Afterwards, those systems can be trained, and the refined knowledge can later be extracted.
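
The core idea can be illustrated with a minimal propositional sketch (an invented example; the thesis treats much richer logic programs and genuine neural networks): each clause acts as a threshold unit that fires when all of its body atoms are active, so one forward pass computes the immediate-consequence operator T_P of the program.

```python
def build_network(program):
    """program: list of (head, body_atoms) clauses."""
    def forward(interpretation):
        # a head becomes true iff some clause body is fully satisfied
        return {head for head, body in program
                if all(atom in interpretation for atom in body)}
    return forward

# p.   q :- p.   r :- q.
program = [("p", []), ("q", ["p"]), ("r", ["q"])]
tp = build_network(program)
state = set()
for _ in range(3):              # iterate T_P to its least fixed point
    state = tp(state)
print(sorted(state))            # -> ['p', 'q', 'r']
```

Iterating the pass from the empty interpretation reaches the least fixed point, i.e. the least model of the program.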
362

Coupled flow systems, adjoint techniques and uncertainty quantification

Garg, Vikram Vinod, 1985- 25 October 2012 (has links)
Coupled systems are ubiquitous in modern engineering and science. Such systems can encompass fluid dynamics, structural mechanics, chemical species transport and electrostatic effects among other components, all of which can be coupled in many different ways. In addition, such models are usually multiscale, making their numerical simulation challenging, and necessitating the use of adaptive modeling techniques. The multiscale, multiphysics models of electroosmotic flow (EOF) constitute a particularly challenging coupled flow system. A special feature of such models is that the coupling between the electric physics and hydrodynamics is via the boundary. Numerical simulations of coupled systems are typically targeted towards specific Quantities of Interest (QoIs). Adjoint-based approaches offer the possibility of QoI-targeted adaptive mesh refinement and efficient parameter sensitivity analysis. The formulation of appropriate adjoint problems for EOF models is particularly challenging, due to the coupling of physics via the boundary as opposed to the interior of the domain. The well-posedness of the adjoint problem for such models is also non-trivial. One contribution of this dissertation is the derivation of an appropriate adjoint problem for slip EOF models, and the development of penalty-based, adjoint-consistent variational formulations of these models. We demonstrate the use of these formulations in the simulation of EOF flows in straight and T-shaped microchannels, in conjunction with goal-oriented mesh refinement and adjoint sensitivity analysis. Complex computational models may exhibit uncertain behavior due to various reasons, ranging from uncertainty in experimentally measured model parameters to imperfections in device geometry. The last decade has seen a growing interest in the field of Uncertainty Quantification (UQ), which seeks to determine the effect of input uncertainties on the system QoIs. 
Monte Carlo methods remain a popular computational approach for UQ due to their ease of use and "embarrassingly parallel" nature. However, a major drawback of such methods is their slow convergence rate. The second contribution of this work is the introduction of a new Monte Carlo method which utilizes local sensitivity information to build accurate surrogate models. This new method, called the Local Sensitivity Derivative Enhanced Monte Carlo (LSDEMC) method, can converge at a faster rate than plain Monte Carlo, especially for problems with a low to moderate number of uncertain parameters. Adjoint-based sensitivity analysis methods enable the computation of sensitivity derivatives at virtually no extra cost after the forward solve. Thus, the LSDEMC method, in conjunction with adjoint sensitivity derivative techniques, can offer a robust and efficient alternative for UQ of complex systems. The efficiency of Monte Carlo methods can be further enhanced by using stratified sampling schemes such as Latin Hypercube Sampling (LHS). However, the non-incremental nature of LHS has been identified as one of the main obstacles in its application to certain classes of complex physical systems. Current incremental LHS strategies restrict the user to at least doubling the size of an existing LHS set to retain the convergence properties of LHS. The third contribution of this research is the development of a new Hierarchical LHS algorithm, which creates designs that can be used to perform LHS studies in a more flexible, incremental setting, taking a step towards adaptive LHS methods.
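
The idea behind derivative-enhanced Monte Carlo can be sketched as a first-order control variate: the local linear surrogate built from sensitivity derivatives has a known mean, so sampling is only needed for the (much smaller) residual. The one-dimensional sketch below is an assumed illustration, not the dissertation's implementation.

```python
import random

def lsdemc_mean(f, grad_f, mu, sigma, n, seed=0):
    """Estimate E[f(X)], X ~ N(mu, sigma^2), using the local linear
    surrogate s(x) = f(mu) + f'(mu)(x - mu) as a control variate.
    E[s(X)] = f(mu) is known exactly, so only the residual is sampled."""
    rng = random.Random(seed)
    resid = 0.0
    for _ in range(n):
        x = rng.gauss(mu, sigma)
        resid += f(x) - (f(mu) + grad_f(mu) * (x - mu))
    return f(mu) + resid / n

f = lambda x: x * x              # exact answer: E[X^2] = mu^2 + sigma^2
est = lsdemc_mean(f, lambda x: 2 * x, mu=1.0, sigma=0.5, n=4000)
print(abs(est - 1.25) < 0.05)    # close to the exact value 1.25
```

For this quadratic test function the residual is (x - mu)^2, whose variance is far smaller than that of f itself, which is exactly where the faster convergence comes from.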
363

Optimization of force fields for molecular dynamics

Di Pierro, Michele 09 February 2015 (has links)
A technology for optimization of potential parameters from condensed phase simulations (POP) is discussed and illustrated. It is based on direct calculations of the derivatives of macroscopic observables with respect to the potential parameters. The derivatives are used in a local minimization scheme, comparing simulated and experimental data. In particular, we show that the Newton Trust-Region protocol allows for accurate and robust optimization. POP is illustrated for a toy problem of alanine dipeptide and is applied to folding of the peptide WAAAH. The helix fraction is highly sensitive to the potential parameters while the slope of the melting curve is not. The sensitivity variations make it difficult to satisfy both observations simultaneously. We conjecture that there is no set of parameters that reproduces experimental melting curves of short peptides that are modeled with the usual functional form of a force field. We then apply the newly developed technology to study the liquid mixture of tert-butanol and water. We are able to obtain, after 4 iterations, the correct phase behavior and accurately predict the value of the Kirkwood-Buff (KB) integrals. We further illustrate that a potential that is determined solely by KB information, or the pair correlation function, is not necessarily unique.
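
The optimization loop can be sketched in one dimension (a toy stand-in: `tanh` plays the role of the simulated observable, and finite differences replace the simulation-derived derivatives; the trust-region handling is deliberately simplified):

```python
import math

TARGET = 0.7  # invented "experimental" observable value

def loss(theta):
    obs = math.tanh(theta)            # stand-in for a simulated observable
    return (obs - TARGET) ** 2        # misfit to the experimental value

# central differences; in POP these derivatives come from the simulation
def d(f, x, h=1e-5):
    return (f(x + h) - f(x - h)) / (2 * h)

theta, radius = 0.0, 0.5
for _ in range(20):
    g = d(loss, theta)                       # gradient
    H = d(lambda t: d(loss, t), theta)       # curvature
    step = -g / H if H > 0 else -math.copysign(radius, g)
    step = max(-radius, min(radius, step))   # trust-region clipping
    theta += step
print(abs(math.tanh(theta) - TARGET) < 1e-6)   # converged to the target
```

The trust radius guards the Newton step in regions where the local quadratic model is unreliable, which is the practical point of the trust-region protocol.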
364

A goal-directed and policy-based approach to system management

Campbell, Gavin A. January 2008 (has links)
This thesis presents a domain-independent approach to dynamic system management using goals and policies. A goal is a general, high-level aim a system must continually work toward achieving. A policy is a statement of how a system should behave for a given set of detectable events and conditions. Combined, goals may be realised through the selection and execution of policies that contribute to their aims. In this manner, a system may be managed using a goal-directed, policy-based approach. The approach is a collection of related techniques and tools: a policy language and policy system, goal definition and refinement via policy selection, and conflict filtering among policies. Central to these themes, ontologies are used to model application domains, and incorporate domain knowledge within the system. The ACCENT policy system (Advanced Component Control Enhancing Network Technologies, http://www.cs.stir.ac.uk/accent) is used as a base for the approach, while goals and policies are defined using an extension of APPEL (Adaptable and Programmable Policy Environment and Language, http://www.cs.stir.ac.uk/appel). The approach differs from existing work in that it reduces system state, goals and policies to a numerical rather than logical form. This is more user-friendly as the goal domain may be expressed without any knowledge of formal methods. All developed techniques and tools are entirely domain-independent, allowing for reuse with other event-driven systems. The ability to express a system aim as a goal provides more powerful and proactive high-level management than was previously possible using policies alone. The approach is demonstrated and evaluated within this thesis for the domains of Internet telephony and sensor network/wind turbine management.
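
A minimal sketch of the numerical (rather than logical) reduction might look as follows; the metric, the policies, and their effect values are all invented for illustration, and a real deployment would estimate effects from the ontology-modeled domain:

```python
# goal: drive a numeric system metric toward a target value
goal = {"metric": "dropped_calls", "target": 0.0, "weight": 1.0}

# candidate policies with an assumed numeric effect on the metric
policies = [
    {"name": "reroute",  "effect": -0.4},
    {"name": "throttle", "effect": -0.1},
    {"name": "log_only", "effect":  0.0},
]

current = 1.0  # current value of the goal metric

def score(policy):
    # weighted distance to target after applying the policy's effect;
    # lower is better, so selection is a simple numeric comparison
    return abs(current + policy["effect"] - goal["target"]) * goal["weight"]

best = min(policies, key=score)
print(best["name"])   # -> reroute
```

Because every goal reduces to a number to be minimized, policy selection needs no formal-methods machinery, which is the user-friendliness argument made above.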
365

Automatic selection of semantically related terms for reformulating queries in a search engine

Κοζανίδης, Ελευθέριος 14 September 2007 (has links)
Query refinement is the process of providing Web information seekers with alternative wordings for expressing their information needs. Although alternative query formulations may contribute to improved retrieval results, their uptake by Web users is intrinsically limited: the suggested wordings convey no explicit information about either the degree or the type of their correlation to the user-issued query. Moreover, alternative formulations are traditionally determined from the semantics of the issued query alone, without considering the search intention behind it. In this work, we introduce a novel query refinement technique which uses a lexical ontology to identify alternative query formulations that are both informative of the user's interests and related to the user-selected queries. The most innovative feature of our technique is the visualization of the alternative query wordings as a hierarchically structured graph. This representation conveys explicit information about how the refined terms relate semantically to the terms of the original query, and it allows the user to choose, interactively, which candidate terms will participate in the refinement process. Experimental results demonstrate that our method has significant potential to improve the user's search experience when retrieving information from the Web.
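
The ontology-driven expansion can be sketched with a toy in-memory hypernym table (a real system would consult a lexical ontology such as WordNet; the terms below are invented):

```python
# toy ontology: term -> list of more general (hypernym) terms
ontology = {
    "jaguar": ["cat", "car"],
    "cat": ["feline"],
    "feline": ["animal"],
    "car": ["vehicle"],
}

def expansion_graph(term, depth=2):
    """Collect (child, parent) edges up to `depth` hops. The resulting
    hierarchy is what lets a user see how each suggested term relates
    semantically to the original query term."""
    edges, frontier = [], [term]
    for _ in range(depth):
        nxt = []
        for t in frontier:
            for parent in ontology.get(t, []):
                edges.append((t, parent))
                nxt.append(parent)
        frontier = nxt
    return edges

print(expansion_graph("jaguar"))
# -> [('jaguar', 'cat'), ('jaguar', 'car'), ('cat', 'feline'), ('car', 'vehicle')]
```

Rendering these edges as a tree rooted at the query term gives exactly the kind of hierarchically structured graph described above, from which the user picks the intended sense.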
366

Feature selection based segmentation of multi-source images : application to brain tumor segmentation in multi-sequence MRI

Zhang, Nan 12 September 2011 (has links) (PDF)
Multi-spectral images have the advantage of providing complementary information to resolve some ambiguities. The challenge, however, is how to make effective use of the multi-spectral images. In this thesis, our study focuses on the fusion of multi-spectral images by extracting the most useful features to obtain the best segmentation with the least cost in time. A Support Vector Machine (SVM) classification integrated with a selection of the features in a kernel space is proposed. The selection criterion is defined by the kernel class separability. Based on this SVM classification, a framework to follow up brain tumor evolution is proposed, which consists of the following steps: learn the brain tumors and select the features from the first magnetic resonance imaging (MRI) examination of the patients; automatically segment the tumor in new data using a multi-kernel SVM based classification; refine the tumor contour by a region-growing technique; and possibly carry out adaptive training. The proposed system was tested on 13 patients with 24 examinations, including 72 MRI sequences and 1728 images. Compared with doctors' manual traces as the ground truth, the average classification accuracy reaches 98.9%. The system utilizes several novel feature selection methods to test the integration of feature selection and SVM classifiers. Compared with traditional SVM, Fuzzy C-means, a neural network and an improved level-set method, the segmentation results and quantitative data analysis demonstrate the effectiveness of our proposed system.
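
The feature-selection criterion can be illustrated with its plain linear analogue: a Fisher-style separability ratio per feature (the thesis works with a kernel-space class-separability measure; this sketch, with invented data, is only the flat version):

```python
def fisher_score(xs_a, xs_b):
    """Between-class vs within-class variance ratio for one feature."""
    ma = sum(xs_a) / len(xs_a)
    mb = sum(xs_b) / len(xs_b)
    va = sum((x - ma) ** 2 for x in xs_a) / len(xs_a)
    vb = sum((x - mb) ** 2 for x in xs_b) / len(xs_b)
    return (ma - mb) ** 2 / (va + vb + 1e-12)   # guard empty denominator

# two classes, two features: feature 0 separates the classes, feature 1 is noise
class_a = [(1.0, 5.1), (1.2, 4.9), (0.9, 5.0)]
class_b = [(3.0, 5.0), (3.1, 5.2), (2.9, 4.8)]

scores = [fisher_score([p[i] for p in class_a], [p[i] for p in class_b])
          for i in range(2)]
best = max(range(2), key=lambda i: scores[i])
print(best)   # -> 0, the discriminative feature
```

Ranking features by such a separability score and keeping only the top-ranked ones is what keeps the subsequent SVM training cheap without sacrificing segmentation quality.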
367

Policy Explanation and Model Refinement in Decision-Theoretic Planning

Khan, Omar Zia January 2013 (has links)
Decision-theoretic systems, such as Markov Decision Processes (MDPs), are used for sequential decision-making under uncertainty. MDPs provide a generic framework that can be applied in various domains to compute optimal policies. This thesis presents techniques that offer explanations of optimal policies for MDPs and then refine decision theoretic models (Bayesian networks and MDPs) based on feedback from experts. Explaining policies for sequential decision-making problems is difficult due to the presence of stochastic effects, multiple possibly competing objectives and long-range effects of actions. However, explanations are needed to assist experts in validating that the policy is correct and to help users in developing trust in the choices recommended by the policy. A set of domain-independent templates to justify a policy recommendation is presented along with a process to identify the minimum possible number of templates that need to be populated to completely justify the policy. The rejection of an explanation by a domain expert indicates a deficiency in the model which led to the generation of the rejected policy. Techniques to refine the model parameters such that the optimal policy calculated using the refined parameters would conform with the expert feedback are presented in this thesis. The expert feedback is translated into constraints on the model parameters that are used during refinement. These constraints are non-convex for both Bayesian networks and MDPs. For Bayesian networks, the refinement approach is based on Gibbs sampling and stochastic hill climbing, and it learns a model that obeys expert constraints. For MDPs, the parameter space is partitioned such that alternating linear optimization can be applied to learn model parameters that lead to a policy in accordance with expert feedback. In practice, the state space of MDPs can often be very large, which can be an issue for real-world problems. 
Factored MDPs are often used to deal with this issue. In Factored MDPs, state variables represent the state space and dynamic Bayesian networks model the transition functions. This helps to avoid the exponential growth in the state space associated with large and complex problems. The approaches for explanation and refinement presented in this thesis are also extended for the factored case to demonstrate their use in real-world applications. The domains of course advising to undergraduate students, assisted hand-washing for people with dementia and diagnostics for manufacturing are used to present empirical evaluations.
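
The underlying MDP machinery can be sketched with textbook value iteration on an invented two-state problem (this illustrates the standard solver whose policies are being explained and refined, not the thesis's own algorithms):

```python
states, actions, gamma = ["s0", "s1"], ["a", "b"], 0.9

# P[s][a]: list of (next_state, probability); R[s][a]: immediate reward
P = {"s0": {"a": [("s0", 0.5), ("s1", 0.5)], "b": [("s0", 1.0)]},
     "s1": {"a": [("s1", 1.0)],              "b": [("s0", 1.0)]}}
R = {"s0": {"a": 0.0, "b": 1.0},
     "s1": {"a": 2.0, "b": 0.0}}

def q(s, a, V):
    return R[s][a] + gamma * sum(p * V[t] for t, p in P[s][a])

V = {s: 0.0 for s in states}
for _ in range(200):                      # iterate the Bellman backup
    V = {s: max(q(s, a, V) for a in actions) for s in states}

policy = {s: max(actions, key=lambda a: q(s, a, V)) for s in states}
print(policy)   # -> {'s0': 'a', 's1': 'a'}
```

Expert feedback of the form "the policy should not pick `a` in `s1`" would, in the refinement setting described above, become a constraint on the entries of `P` and `R` under which the optimization is redone.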
368

Parallel Anisotropic Block-based Adaptive Mesh Refinement Algorithm For Three-dimensional Flows

Williamschen, Michael 11 December 2013 (has links)
A three-dimensional, parallel, anisotropic, block-based, adaptive mesh refinement (AMR) algorithm is proposed and described for the solution of fluid flows on body-fitted, multi-block, hexahedral meshes. Refinement and de-refinement in any grid block computational direction, or combination of directions, allows the mesh to rapidly adapt to anisotropic flow features such as shocks, boundary layers, or flame fronts, common to complex flow physics. Anisotropic refinements and an efficient and highly scalable parallel implementation lead to a potential for significant reduction in computational cost as compared to a more typical isotropic approach. Unstructured root-block topology allows for greater flexibility in the treatment of complex geometries. The AMR algorithm is coupled with an upwind finite-volume scheme for the solution of the Euler equations governing inviscid, compressible, gaseous flow. Steady-state and time-varying three-dimensional flow problems are investigated for various geometries, including the cubed-sphere mesh.
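
The anisotropic refinement decision can be sketched as independent per-direction flags driven by a directional solution-gradient indicator (the threshold and field values are invented; the actual algorithm uses proper physics-based refinement criteria):

```python
def refine_flags(block, threshold=1.0):
    """block: dict mapping computational direction -> max solution
    gradient sensed in that direction. Returns the set of directions
    to refine, so each direction is halved independently."""
    return {d for d, g in block.items() if g > threshold}

# a block inside a boundary layer: steep wall-normal (j) gradient only
block = {"i": 0.2, "j": 4.5, "k": 0.3}
print(sorted(refine_flags(block)))   # -> ['j'], refine wall-normal only
```

An isotropic scheme would split this block in all three directions (8 children); flagging only `j` yields 2 children, which is the source of the cost savings claimed above.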
370

Machine Learning Approaches to Refining Post-translational Modification Predictions and Protein Identifications from Tandem Mass Spectrometry

Chung, Clement 11 December 2012 (has links)
Tandem mass spectrometry (MS/MS) is the dominant approach for large-scale peptide sequencing in high-throughput proteomic profiling studies. The computational analysis of MS/MS spectra involves the identification of peptides from experimental spectra, especially those with post-translational modifications (PTMs), as well as the inference of protein composition based on the putative identified peptides. In this thesis, we tackled two major challenges associated with an MS/MS analysis: 1) the refinement of PTM predictions from MS/MS spectra and 2) the inference of protein composition based on peptide predictions. We proposed two PTM prediction refinement algorithms, PTMClust and its Bayesian nonparametric extension iPTMClust, and a protein identification algorithm, pro-HAP, that is based on a novel two-layer hierarchical clustering approach that leverages prior knowledge about protein function. Individually, we show that our two PTM refinement algorithms outperform the state-of-the-art algorithms and our protein identification algorithm performs on par with the state of the art. Collectively, as a demonstration of our end-to-end MS/MS computational analysis of a human chromatin protein complex study, we show that our analysis pipeline can find high-confidence putative novel protein complex members. Moreover, it can provide valuable insights into the formation and regulation of protein complexes by detailing the specificity of different PTMs for the members in each complex.
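
The two-layer idea can be sketched as: first bucket proteins by a functional annotation (the prior knowledge), then merge proteins within a bucket that share identified peptides. All names and the merging rule below are invented for illustration; pro-HAP's actual clustering is considerably more involved.

```python
proteins = {
    "P1": {"function": "chromatin",  "peptides": {"AAK", "LLR"}},
    "P2": {"function": "chromatin",  "peptides": {"AAK", "QQE"}},
    "P3": {"function": "metabolism", "peptides": {"MMF"}},
}

# layer 1: bucket proteins by functional annotation
layer1 = {}
for name, info in proteins.items():
    layer1.setdefault(info["function"], []).append(name)

# layer 2: within each bucket, merge proteins sharing a peptide
groups = []
for members in layer1.values():
    clusters = []
    for m in members:
        for c in clusters:
            if any(proteins[m]["peptides"] & proteins[o]["peptides"]
                   for o in c):
                c.append(m)
                break
        else:
            clusters.append([m])
    groups.extend(clusters)

print(sorted(sorted(g) for g in groups))   # -> [['P1', 'P2'], ['P3']]
```

Restricting peptide-sharing merges to within a functional bucket is what stops spuriously shared peptides from gluing together unrelated proteins.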
