About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
151

Clause Learning, Resolution Space, and Pebbling

Hertel, Philipp, 19 January 2009
Currently, the most effective complete SAT solvers are based on the DPLL algorithm augmented by Clause Learning. These solvers can handle many real-world problems from application areas like verification, diagnosis, planning, and design. Clause Learning works by storing previously computed, intermediate results and then reusing them to prune the future search tree. Without Clause Learning, however, DPLL loses most of its effectiveness on real-world problems. Recently there has been some work on obtaining a deeper understanding of the technique of Clause Learning. In this thesis, we contribute to the understanding of Clause Learning, and of the Resolution proof system that underlies it, in a number of ways. We characterize Clause Learning as a new, intuitive Resolution refinement which we call CL. We then show that CL proofs can effectively p-simulate general Resolution. Furthermore, this result holds even for the more restrictive class of greedy, unit-propagating CL proofs, which more accurately characterize Clause Learning as it is used in practice. This result is surprising and indicates that Clause Learning is significantly more powerful than was previously known. Since Clause Learning makes use of previously derived clauses, it motivates the study of Resolution space. We contribute to this area of study by proving that determining the variable space of a Resolution derivation is PSPACE-complete. The reduction also yields a surprising exponential size/space trade-off for Resolution, in which an increase of just 3 units of variable space results in an exponential decrease in proof size. This result runs counter to the intuitions of many in the SAT-solving community, who have generally believed that proof size should decrease smoothly as available space increases. In order to prove these Resolution results, we make use of intuition regarding the relationship between Black-White Pebbling and Resolution. In fact, determining the complexity of Resolution variable space required us to first prove that Black-White Pebbling is PSPACE-complete. The complexity of the Black-White Pebbling Game had remained an open problem for 30 years and resisted numerous attempts to solve it. Its solution is the primary contribution of this thesis.
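For readers unfamiliar with the technique, a minimal sketch of DPLL augmented by clause learning is given below. It is a toy illustration only, not the CL refinement defined in the thesis: clauses are DIMACS-style integer lists, decisions always pick the first free variable, and each conflict learns the negation of the current decision literals (a simple but sound learning scheme).

```python
# Toy DPLL solver with a simple form of clause learning.
# Clauses are lists of nonzero ints (DIMACS style): 3 means x3, -3 means NOT x3.

def unit_propagate(clauses, assignment):
    """Repeatedly assign literals forced by unit clauses. Return False on conflict."""
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            unassigned, satisfied = [], False
            for lit in clause:
                if lit in assignment:
                    satisfied = True
                    break
                if -lit not in assignment:
                    unassigned.append(lit)
            if satisfied:
                continue
            if not unassigned:            # every literal falsified: conflict
                return False
            if len(unassigned) == 1:      # unit clause: forced assignment
                assignment.add(unassigned[0])
                changed = True
    return True

def solve(clauses, num_vars):
    learned = []                          # clauses remembered from earlier conflicts
    trail = []                            # decision literals: [literal, already_flipped]
    while True:
        assignment = set(lit for lit, _ in trail)
        if unit_propagate(clauses + learned, assignment):
            free = [v for v in range(1, num_vars + 1)
                    if v not in assignment and -v not in assignment]
            if not free:
                return assignment         # every variable assigned: satisfiable
            trail.append([free[0], False])  # decide on the first free variable
        else:
            # Conflict: learn a clause blocking the current decisions, then backtrack.
            learned.append([-lit for lit, _ in trail])
            while trail and trail[-1][1]: # undo decisions whose other branch is done
                trail.pop()
            if not trail:
                return None               # both branches exhausted everywhere: UNSAT
            trail[-1][0] = -trail[-1][0]  # flip the most recent open decision
            trail[-1][1] = True

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3) and (not x3 or x2)
print(solve([[1, 2], [-1, 3], [-2, -3], [-3, 2]], 3))
```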
152

A Task-Centered Visualization Design Environment and a Method for Measuring the Complexity of Visualization Designs

Suo, Xiaoyuan, 17 July 2009
Recent years have seen a growing interest in the emerging area of computer security visualization, which is about developing visualization methods to help solve computer security problems. In this thesis, we first present a method for measuring the complexity of information visualization designs. The complexity is measured in terms of visual integration, the number of separable dimensions for each visual unit, the complexity of interpreting the visual attributes, the number of visual units, and the efficiency of visual search. This method is designed to help developers quickly evaluate multiple design choices, and potentially enables a computer to automatically measure the complexity of a visualization design. We also analyze the design space of network security visualization. Our main contribution is a new taxonomy that consists of three dimensions: data, visualizations, and tasks. Each dimension is further divided into hierarchical layers, and for each layer we have identified key parameters for making major design choices. This new taxonomy provides a comprehensive framework that can guide network security visualization developers to systematically explore the design space and make informed design decisions. It can also help developers or users systematically evaluate existing network security visualization techniques and systems. Finally, it helps developers identify gaps in the design space and create new techniques. The taxonomy showed that most existing computer security visualization programs are data-centered. However, some studies have shown that task-centered visualization is perhaps more effective. To test this hypothesis, we propose a task-centered visualization design framework, in which tasks are explicitly identified and organized and visualizations are constructed for specific tasks and their related data parameters. The centerpiece of this framework is a task tree which dynamically links the raw data with automatically generated visualizations. The task tree serves as a high-level interaction technique that allows users to conduct problem solving naturally at the task level, while still giving end users flexible control over visualization construction. This work is currently being extended by building a prototype visualization system based on a Task-centered Visualization Design Architecture.
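The abstract names the five factors but not the formula that combines them, so the sketch below simply aggregates hypothetical per-design measurements with placeholder weights; the data structures, weights, and additive form are illustrative assumptions rather than the measurement method defined in the thesis.

```python
# Hypothetical complexity score over the five factors named in the abstract.
from dataclasses import dataclass

@dataclass
class VisualUnit:
    separable_dimensions: int             # e.g. position, colour, size used independently
    attribute_interpretation_cost: float  # how hard the unit's attributes are to decode (0..1)

@dataclass
class VisualizationDesign:
    units: list                      # the visual units making up the design
    visual_integration: float        # effort needed to integrate separate views (0..1)
    visual_search_efficiency: float  # 1.0 = target pops out; lower = serial search

def complexity_score(design, weights=(1.0, 1.0, 1.0, 0.5, 1.0)):
    """Aggregate the five factors into one number (higher = more complex)."""
    w_int, w_dim, w_attr, w_units, w_search = weights
    dims = sum(u.separable_dimensions for u in design.units)
    attr = sum(u.attribute_interpretation_cost for u in design.units)
    return (w_int * design.visual_integration
            + w_dim * dims
            + w_attr * attr
            + w_units * len(design.units)
            + w_search * (1.0 - design.visual_search_efficiency))

# Compare two hypothetical design alternatives for the same data.
glyph_map = VisualizationDesign(
    units=[VisualUnit(3, 0.6), VisualUnit(2, 0.4)],
    visual_integration=0.7, visual_search_efficiency=0.5)
small_multiples = VisualizationDesign(
    units=[VisualUnit(1, 0.2) for _ in range(4)],
    visual_integration=0.3, visual_search_efficiency=0.8)
print(complexity_score(glyph_map), complexity_score(small_multiples))
```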
153

The Sequence and Function Relationship of Elastin: How Repetitive Sequences can Influence the Physical Properties of Elastin

He, David, 09 January 2012
Elastin is an essential extracellular protein that is a key component of elastic fibres, providing elasticity to cardiac, dermal, and arterial tissues. During the development of the human cardiovascular system, elastin self-assembles before being integrated into fibres, undergoing no significant turnover during the human lifetime. Abnormalities in elastin can adversely affect its self-assembly and may lead to malformed elastic fibres. Due to the longevity required of these fibres, even minor abnormalities may have a large cumulative effect over the course of a lifetime, leading to late-onset vascular diseases. This thesis project has identified over-represented repetitive elements in elastin which are believed to be important for the self-assembly and elastomeric properties of the protein. Initial studies of single nucleotide polymorphisms (SNPs) from the HapMap project and dbSNP yielded a set of genetic variation sites in the elastin gene. Based on these studies, glycine-to-serine and lysine-to-arginine substitutions were introduced into elastin-like polypeptides. The self-assembly properties of the resulting elastin-like polypeptides were observed under a microscope and measured using absorbance at 440 nm. Assembled polypeptides were also cross-linked to form thin membranes whose mechanical and physical properties were measured and compared. These mutations resulted in markedly different behavior from that of wild-type elastin-like proteins, suggesting that mutations in the repetitive elements of the elastin sequence can lead to adverse changes in the physical and functional properties of the resulting protein. Using next-generation sequencing, patients with thoracic aortic aneurysms are being genotyped to discover polymorphisms which may adversely affect the self-assembly properties of elastin, providing a link between genetic variation in elastin and cardiovascular disease.
156

Malaria in the Amazon: An Agent-Based Approach to Epidemiological Modeling of Coupled Systems

King, Joshua Michael Lloyd, 17 August 2009
The epidemiology of malaria involves a complex set of local interactions amongst host, vector, and environment. A history of reemergence, epidemic transition, and ensuing endemic transmission in Iquitos, Peru provides an interesting case with which to model and explore such interactions. In this region of the Peruvian Amazon, climate change, development initiatives, and landscape fragmentation are amongst a unique set of local spatial variables underlying the endemicity of malaria. Traditional population-based approaches lack the ability to resolve the spatial influences of these variables. Presented here is a framework for spatially explicit, agent-based modeling of malaria transmission dynamics in Iquitos and surrounding areas. The use of an agent-based model presents a new opportunity to spatially define causal factors and influences of transmission between mosquito vectors and human hosts. In addition to spatial considerations, the ability to model individual human decisions can capture socio-economic and human-environment interactions related to malaria transmission. Three interacting sub-models representing human decisions, vector dynamics, and environmental factors comprise the model. Feedbacks between the interacting sub-models define individual decisions and ultimately the flexibility that will allow the model to function in a diagnostic capacity. Sensitivity analysis and simulated interactions are used to discuss this diagnostic capability and to build understanding of the physical systems driving local transmission of malaria.
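The coupling of the three sub-models and their feedbacks can be pictured with a toy simulation loop like the one below; the agent attributes, infection and recovery probabilities, and the rainfall and fragmentation drivers are illustrative assumptions, not parameters from the thesis's calibrated model.

```python
# Toy agent-based loop coupling the three sub-models named in the abstract:
# environment -> vector dynamics -> human decisions -> back to exposure.
import random

random.seed(1)

class Human:
    def __init__(self):
        self.infected = False
        self.uses_bednet = False  # an individual decision that feeds back on exposure

def environment(week):
    """Crude seasonal rainfall proxy that rises over the course of each year."""
    return 0.5 + 0.5 * (week % 52) / 52

def vector_density(rainfall, fragmentation=0.3):
    """More rainfall and more landscape fragmentation -> more vector habitat."""
    return rainfall * (1.0 + fragmentation)

def step(humans, week):
    rain = environment(week)
    density = vector_density(rain)
    prevalence = sum(h.infected for h in humans) / len(humans)
    for h in humans:
        # Human-decision sub-model: adopt a bed net more often when prevalence is high.
        if not h.uses_bednet and random.random() < 0.1 * prevalence:
            h.uses_bednet = True
        # Transmission: exposure scales with vector density and local prevalence,
        # and is reduced by the bed-net decision (the feedback between sub-models).
        exposure = 0.05 * density * (0.2 + prevalence)
        if h.uses_bednet:
            exposure *= 0.4
        if not h.infected and random.random() < exposure:
            h.infected = True
        elif h.infected and random.random() < 0.1:  # recovery
            h.infected = False

humans = [Human() for _ in range(500)]
humans[0].infected = True  # index case
for week in range(104):
    step(humans, week)
print("infected after 2 years:", sum(h.infected for h in humans))
```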
157

Computational Distinguishability of Quantum Channels

Rosgen, William, January 2009
The computational problem of distinguishing two quantum channels is central to quantum computing. It is a generalization of the well-known satisfiability problem from classical to quantum computation. This problem is shown to be surprisingly hard: it is complete for the class QIP of problems that have quantum interactive proof systems, which implies that it is hard for the class PSPACE of problems solvable by a classical computation in polynomial space. Several restrictions of distinguishability are also shown to be hard. It is no easier when restricted to quantum computations of logarithmic depth, to mixed-unitary channels, to degradable channels, or to antidegradable channels. These hardness results are demonstrated by finding reductions between these classes of quantum channels. These techniques have applications outside the distinguishability problem, as the construction for mixed-unitary channels is used to prove that the additivity problem for the classical capacity of quantum channels can be equivalently restricted to the mixed unitary channels.
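As a concrete, if very simplified, picture of what distinguishing two channels means operationally, the sketch below estimates how well two single-qubit channels can be told apart by sampling unentangled input states and taking the largest trace distance between their outputs. This only lower-bounds the diamond-norm distance (entangled inputs with a reference system are ignored) and has no bearing on the QIP-completeness results; the channels and parameters are illustrative assumptions.

```python
# Brute-force illustration of single-qubit channel distinguishability.
import numpy as np

def depolarizing(rho, p):
    """Replace the state with the maximally mixed state with probability p."""
    return (1 - p) * rho + p * np.eye(2) / 2

def dephasing(rho, p):
    """Apply a phase flip (Z) with probability p."""
    Z = np.diag([1.0, -1.0])
    return (1 - p) * rho + p * (Z @ rho @ Z)

def trace_distance(a, b):
    eigs = np.linalg.eigvalsh(a - b)          # difference is Hermitian
    return 0.5 * np.sum(np.abs(eigs))

def estimated_distinguishability(channel1, channel2, samples=2000, seed=0):
    rng = np.random.default_rng(seed)
    best = 0.0
    for _ in range(samples):
        # Random pure qubit state |psi><psi|.
        v = rng.normal(size=2) + 1j * rng.normal(size=2)
        v /= np.linalg.norm(v)
        rho = np.outer(v, v.conj())
        best = max(best, trace_distance(channel1(rho), channel2(rho)))
    return best

c1 = lambda rho: depolarizing(rho, 0.3)
c2 = lambda rho: dephasing(rho, 0.3)
print(estimated_distinguishability(c1, c2))
```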
160

Documenting & Using Cognitive Complexity Mitigation Strategies (CCMS) to Improve the Efficiency of Cross-Context User Transfers

Bhagat, Rahul, January 2011
Cognitive complexity mitigation strategies are methods and approaches used by users to reduce the apparent complexity of problems, thus making them easier to solve. These strategies are often effective because they mitigate the limitations of human working memory and attention resources. Such cognitive complexity mitigation strategies are used throughout the design, development, and operational processes of complex systems. Thus, a better understanding of these strategies, and of methods that leverage them, can help improve the efficiency of such processes. Additionally, changes in the use of these strategies across environments can identify cognitive differences in operating and developing across these contexts. This knowledge can help improve the effectiveness of cross-context user transfers by suggesting change management processes that incorporate the degree of cognitive difference across contexts. In order to document cognitive complexity mitigation strategies and the changes in their usage, two application domains are studied. First, cognitive complexity mitigation strategies used by designers during the engineering design process are identified through an ethnographic immersion with a participating engineering firm, followed by an analysis of the designers' logbooks and validation interviews with the designers. Results include the identification of five strategies used by the designers to mitigate design complexity: Blackbox Modeling, Whitebox Modeling, Decomposition, Visualization, and Prioritized Lists. The five complexity mitigation strategies are probed further across a larger sample of engineering designers, and the usage frequency of these strategies is assessed across commonly performed engineering design activities, namely Selection, Configuration, and Parametric activities. The results indicate the preferred use of certain strategies depending on the engineering activity being performed. Such preferential usage of complexity mitigation strategies is also assessed with regard to Original and Redesign project types; however, there is no indication of biased strategy usage across these two project characterizations. These results are an example of a usage-frequency-based difference analysis; such analyses help identify the strategies that see increased or reduced usage when transferring across activities. In contrast to the first application domain, which captures changes in how often strategies are used across contexts, the second application domain is a method of assessing differences based on how a specific strategy is used differently across contexts. This alternative method is developed through a project that aims to optimize the transfer of air traffic controllers across different airspace sectors. The method uses a previously researched complexity mitigation strategy, known as a structure-based abstraction, to develop a difference analysis tool called the Sector Abstraction Binder. This tool is used to perform cognitive difference analyses between air traffic control sectors by leveraging characteristic variations in how structure-based abstractions are applied across different sectors. The Sector Abstraction Binder is applied to two high-level airspace sectors to demonstrate the utility of such a method.
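A usage-frequency-based difference analysis of the kind described can be sketched as a comparison of per-activity strategy frequencies; the counts below are made-up placeholders and the shift metric is an illustrative assumption, not data or a method from the thesis.

```python
# Illustrative usage-frequency difference analysis: compare how often each of the
# five strategies is used in two design activities, and rank the largest shifts.
from collections import Counter

selection = Counter({"Blackbox Modeling": 12, "Whitebox Modeling": 3,
                     "Decomposition": 5, "Visualization": 9, "Prioritized Lists": 14})
parametric = Counter({"Blackbox Modeling": 4, "Whitebox Modeling": 11,
                      "Decomposition": 8, "Visualization": 10, "Prioritized Lists": 2})

def relative_usage(counts):
    total = sum(counts.values())
    return {strategy: n / total for strategy, n in counts.items()}

def usage_shift(from_activity, to_activity):
    """Positive values: strategies a transferring designer would rely on more."""
    a, b = relative_usage(from_activity), relative_usage(to_activity)
    return sorted(((s, b[s] - a[s]) for s in a), key=lambda x: -abs(x[1]))

for strategy, shift in usage_shift(selection, parametric):
    print(f"{strategy:20s} {shift:+.2f}")
```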
