  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
731

Autonomic Core Network Management System

Tizghadam, Ali 11 December 2009 (has links)
This thesis presents an approach to the design and management of core networks where the packet transport is the main service and the backbone should be able to respond to unforeseen changes in network parameters in order to provide smooth and reliable service for the customers. Inspired by Darwin's seminal work describing the long-term processes in life, and with the help of graph theoretic metrics, in particular the "random-walk betweenness", we assign a survival value, the network criticality, to a communication network to quantify its robustness. We show that the random-walk betweenness of a node (link) is the product of two terms, a global measure which is fixed for all the nodes (links) and a local graph measure which is in fact the weight of the node (link). The network criticality is defined as the global part of the betweenness of a node (link). We show that the network criticality is a monotone decreasing and strictly convex function of the weight matrix of the network graph. We argue that any communication network can be modeled as a topology that evolves based on survivability and performance requirements. The evolution should be in the direction of decreasing the network criticality, which in turn increases the network robustness. We use network criticality as the main control parameter and we propose a network management system, AutoNet, to guide the network evolution in real time. AutoNet consists of two autonomic loops, the slow loop to control the long-term evolution of robustness throughout the whole network, and the fast loop to account for short-term performance and robustness issues. We investigate the dynamics of network criticality and we develop a convex optimization problem to minimize the network criticality. We propose a network design procedure based on the optimization problem which can be used to develop the long-term autonomic loop for AutoNet. 
Furthermore, we use the properties of the duality gap of the optimization problem to develop traffic engineering methods to manage the transport of packets in a network. This provides the short-term autonomic loop of the AutoNet architecture. Network criticality can also be used to rank alternative networks by their robustness to unforeseen changes in network conditions. This can help find the best network structure, under pre-specified constraints, to deal with robustness issues.
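The robustness measure described above is closely tied to effective resistance in the graph. A minimal sketch, assuming a total-effective-resistance formulation of network criticality computed from the Laplacian pseudoinverse (the exact normalization in the thesis may differ):

```python
import numpy as np

def network_criticality(adj):
    """Robustness measure for a weighted graph, taken here as the total
    effective resistance summed over ordered node pairs; the exact
    normalization used in the thesis is an assumption."""
    adj = np.asarray(adj, dtype=float)
    n = adj.shape[0]
    L = np.diag(adj.sum(axis=1)) - adj   # weighted graph Laplacian
    Lp = np.linalg.pinv(L)               # Moore-Penrose pseudoinverse
    tau = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            # effective resistance between nodes i and j
            tau += Lp[i, i] + Lp[j, j] - 2.0 * Lp[i, j]
    return 2.0 * tau                     # count ordered pairs (i, j), i != j
```

On a 3-node example, the triangle (3-cycle) scores lower criticality than the 3-node path, matching the intuition that adding a link increases robustness.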
732

Machine Learning and Graph Theory Approaches for Classification and Prediction of Protein Structure

Altun, Gulsah 22 April 2008 (has links)
Recently, many methods have been proposed for classification and prediction problems in bioinformatics. One of these problems is protein structure prediction, and machine learning approaches and new algorithms have been proposed to solve it. Among the machine learning approaches, Support Vector Machines (SVM) have attracted a lot of attention due to their high prediction accuracy. Since protein data consist of sequence and structural information, another widely used approach for modeling this structured data is to use graphs. In computer science, graph theory has been widely studied; however, it has only recently been applied to bioinformatics. In this work, we introduce new algorithms based on statistical methods, graph theory concepts, and machine learning for the protein structure prediction problem. A new statistical method based on z-scores is introduced for seed selection in proteins. A new method based on finding common cliques in protein data for feature selection is also introduced, which reduces noise in the data. We also introduce new binary classifiers for the prediction of structural transitions in proteins. These new binary classifiers achieve much higher accuracy than the current traditional binary classifiers.
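The z-score idea behind the seed-selection step can be sketched as follows; the actual scoring features and threshold used in the thesis are not specified here, so both are illustrative assumptions:

```python
import math

def z_scores(values):
    """Standard z-score of each value against the sample mean and
    (population) standard deviation."""
    n = len(values)
    mean = sum(values) / n
    std = math.sqrt(sum((v - mean) ** 2 for v in values) / n)
    return [(v - mean) / std for v in values]

def select_seeds(scores, threshold=2.0):
    """Keep indices scoring at least `threshold` standard deviations
    above the mean; the threshold is an illustrative choice, not the
    thesis's."""
    return [i for i, z in enumerate(z_scores(scores)) if z >= threshold]
```

Any residue whose score stands well above the background is retained as a candidate seed; the rest are discarded.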
733

Predicting Protein Calcium Binding Sites

Wang, Xue 01 December 2009 (has links)
Calcium is one of the most important metal ions in the human body, involved in numerous physicochemical activities including cell division and apoptosis, muscle contraction, neurotransmitter release, enzyme activation, and blood clotting. Calcium fulfills its functions through binding to different classes of calcium-binding proteins. To facilitate our understanding of the roles of calcium in biological systems, and to design novel metal-binding proteins with tailored binding capabilities, it is important to develop computational algorithms to predict calcium-binding sites in different classes of proteins. In the literature, calcium-binding sites may be represented either by a spatial point or by the set of residues chelating the calcium ion. A thorough statistical analysis of known calcium-binding proteins deposited in the Protein Data Bank gives reference values of various parameters characterizing geometric and chemical features of calcium-binding sites, including distances, angles, dihedral angles, the Hull property, coordination numbers, ligand types, and formal charges. It also reveals clear differences between the well-known EF-hand calcium-binding motif and other calcium-binding motifs. Utilizing these multiple geometric and chemical parameters of well-formed calcium-binding sites, we developed the MUG (MUltiple Geometries) program. MUG can re-identify the coordinates of the documented calcium ion and the set of ligand residues. Three previously published data sets were tested, comprising, respectively, 19, 44 and 54 holo protein structures with 48, 92 and 91 documented calcium-binding sites. Defining a "correct hit" as a point within 3.5 angstroms of the documented calcium location, MUG has a sensitivity of around 90% and a selectivity of around 80%. The sets of ligand residues (calcium-binding pockets) were identified for 43, 66 and 63 documented calcium ions in these three data sets, respectively. 
In order to achieve true prediction, our program was then enhanced to predict calcium-binding pockets in apo (calcium-free) proteins. Our new program MUGSR accounts for the conformational changes involved in calcium-binding pockets before and after the binding of calcium ions. It is able to capture calcium binding pockets that may undergo local conformational changes or side chain torsional rotations, which is validated by referring back to the corresponding holo protein structure sharing more than 98% sequence similarity with the apo protein.
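The "correct hit" criterion and the sensitivity measure quoted above translate directly into code; function names here are illustrative, not MUG's API:

```python
import math

CUTOFF = 3.5  # angstroms: the "correct hit" radius quoted above

def is_correct_hit(predicted, documented, cutoff=CUTOFF):
    """A predicted point counts as a hit if it lies within `cutoff`
    angstroms of the documented calcium location."""
    return math.dist(predicted, documented) <= cutoff

def sensitivity(predictions, documented_sites):
    """Fraction of documented sites matched by at least one prediction."""
    hits = sum(1 for d in documented_sites
               if any(is_correct_hit(p, d) for p in predictions))
    return hits / len(documented_sites)
```

Selectivity would be computed symmetrically, as the fraction of predictions that match some documented site.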
734

Modeling the Power Distribution Network of a Virtual City and Studying the Impact of Fire on the Electrical Infrastructure

Bagchi, Arijit 12 March 2013 (has links)
The smooth and reliable operation of key infrastructure components like water distribution systems, electric power systems, and telecommunications is essential for a nation's economic growth and overall security. Tragic events such as the Northridge earthquake and Hurricane Katrina have shown us how the occurrence of a disaster can cripple one or more such critical infrastructure components and cause widespread damage and destruction. Technological advancements made over the last few decades have resulted in these infrastructure components becoming highly complicated and inter-dependent on each other. The development of tools which can aid in understanding this complex interaction amongst the infrastructure components is thus of paramount importance for being able to manage critical resources and carry out post-emergency recovery missions. The research work conducted as a part of this thesis aims at studying the effects of fire (a calamitous event) on the electrical distribution network of a city. The study has been carried out on a test bed comprising a virtual city named Micropolis, which was modeled using a Geographic Information System (GIS) based software package. This report describes the designing of a separate electrical test bed using Simulink, based on the GIS layout of the power distribution network of Micropolis. It also proposes a method of quantifying the damage caused by fire to the electrical network by means of a parameter called the Load Loss Damage Index (LLDI). Finally, it presents an innovative graph theoretic approach for determining how to route power across faulted sections of the electrical network using a given set of Normally Open switches. The power is routed along a path of minimum impedance. The proposed methodologies are then tested by running numerous simulations on the Micropolis test bed, corresponding to different fire spread scenarios. 
The LLDI values generated from these simulation runs are then analyzed in order to determine the most damaging scenarios and to identify infrastructure components of the city which are most crucial in containing the damage caused by fire to the electrical network. The conclusions thereby drawn can give useful insights to emergency response personnel when they deal with real-life disasters.
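The minimum-impedance re-routing step can be sketched as a standard shortest-path search with line impedances as edge weights; the thesis's exact graph-theoretic formulation may differ:

```python
import heapq

def min_impedance_path(graph, source, target):
    """Dijkstra's shortest-path search with line impedances as edge
    weights. `graph` maps each node to a list of (neighbor, impedance)
    pairs. Returns the minimum-impedance path and its total impedance."""
    dist = {source: 0.0}
    prev = {}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == target:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, z in graph[u]:
            nd = d + z
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    # Reconstruct the path by walking back from the target
    path, node = [], target
    while node != source:
        path.append(node)
        node = prev[node]
    path.append(source)
    return path[::-1], dist[target]
```

In the fire scenarios above, the candidate edges would be the paths made available by closing Normally Open switches around the faulted sections.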
735

On the Relation Between Quantum Computation and Classical Statistical Mechanics

Geraci, Joseph 20 January 2009 (has links)
We provide a quantum algorithm for the exact evaluation of the Potts partition function for a certain class of restricted instances of graphs that correspond to irreducible cyclic codes. We use the same approach to demonstrate that quantum computers can provide an exponential speed up over the best classical algorithms for the exact evaluation of the weight enumerator polynomial for a family of classical cyclic codes. In addition, we provide an efficient quantum approximation algorithm for a function (the signed-Euler generating function) closely related to the Ising partition function and demonstrate that this problem is BQP-complete. We accomplish the above for the Potts partition function by using a series of links between Gauss sums, classical coding theory, graph theory and the partition function. We exploit the fact that there exists an efficient approximation algorithm for Gauss sums and the fact that this problem is equivalent in complexity to evaluating the discrete logarithm. A theorem of McEliece allows one to turn the Gauss sum approximation into an exact evaluation of the Potts partition function. Stripping the physics from this result leaves one with the result for the weight enumerator polynomial. The result for the approximation of the signed-Euler generating function was accomplished by fashioning a new mapping between quantum circuits and graphs. The mapping provided us with a way of relating the cycle structure of graphs with quantum circuits. Using a slight variant of this mapping, we obtain the final result of this thesis: a method for testing families of quantum circuits for their classical simulatability. We thus provide an efficient way of deciding whether a quantum circuit provides any additional computational power over classical computation, and this is achieved by exploiting the fact that planar instances of the Ising partition function (with no external magnetic field) can be efficiently classically computed.
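For scale, the generic classical evaluation that these quantum algorithms aim to beat can be sketched for the zero-field Ising case; this brute-force toy is for illustration only and is not the thesis's algorithm:

```python
import math
from itertools import product

def ising_partition(edges, n, beta=1.0):
    """Brute-force zero-field Ising partition function
    Z = sum over spin configurations of exp(-beta * H),
    with H = -sum over edges (i, j) of s_i * s_j.
    Cost is 2^n: the exponential blow-up the quantum algorithms target
    (and which planar zero-field instances avoid classically)."""
    Z = 0.0
    for spins in product((-1, 1), repeat=n):
        H = -sum(spins[i] * spins[j] for i, j in edges)
        Z += math.exp(-beta * H)
    return Z
```

For a single edge this gives Z = 4·cosh(beta), a handy sanity check.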
737

Compact Connectivity Representation for Triangle Meshes

Gurung, Topraj 05 April 2013 (has links)
Many digital models used in entertainment, medical visualization, material science, architecture, Geographic Information Systems (GIS), and mechanical Computer Aided Design (CAD) are defined in terms of their boundaries. These boundaries are often approximated using triangle meshes. The complexity of models, which can be measured by triangle count, increases rapidly with the precision of scanning technologies and with the need for higher resolution. An increase in mesh complexity results in an increase of storage requirement, which in turn increases the frequency of disk access or cache misses during mesh processing, and hence decreases performance. For example, in a test application involving a mesh with 55 million triangles, a machine with 1GB of memory performs about 6000 times worse than a machine with 4GB of memory because of memory thrashing. To help reduce memory thrashing, we focus on decreasing the average storage requirement per triangle, measured in 32-bit integer references per triangle (rpt). This thesis covers compact connectivity representations for triangle meshes and discusses four data structures:
1. Sorted Opposite Table (SOT), which uses 3 rpt and has been extended to support tetrahedral meshes.
2. Sorted Quad (SQuad), which uses about 2 rpt and has been extended to support streaming.
3. Laced Ring (LR), which uses about 1 rpt and offers an excellent compromise between storage compactness and performance of mesh traversal operators.
4. Zipper, an extension of LR, which uses about 6 bits per triangle (equivalently 0.19 rpt) and is therefore the most compact representation.
The triangle mesh data structures proposed in this thesis support the standard set of mesh connectivity operators introduced by the previously proposed Corner Table at an amortized constant time complexity. They can be constructed in linear time and space from the Corner Table or any equivalent representation. 
If geometry is stored as 16-bit coordinates, using Zipper instead of the Corner Table increases the size of the mesh that can be stored in core memory by a factor of about 8.
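For context, here is a minimal sketch of the baseline Corner Table representation that the structures above compress; the details of the thesis's Corner Table API may differ:

```python
# Corner Table sketch: corner c belongs to triangle c // 3; V[c] gives its
# vertex and O[c] the opposite corner across the shared edge (-1 on the
# border). Storing V and O costs 6 references per triangle, which SOT,
# SQuad, LR, and Zipper progressively compress.

def tri(c):
    return c // 3                           # triangle containing corner c

def next_c(c):
    return 3 * tri(c) + (c + 1) % 3         # next corner within the triangle

def prev_c(c):
    return 3 * tri(c) + (c + 2) % 3         # previous corner within the triangle

def build_opposites(V):
    """Derive the O table from the V table: two corners are opposite when
    their triangles share the edge spanned by their next/prev vertices."""
    edge_to_corner, O = {}, [-1] * len(V)
    for c in range(len(V)):
        key = (V[next_c(c)], V[prev_c(c)])
        mate = edge_to_corner.pop((key[1], key[0]), None)
        if mate is not None:
            O[c], O[mate] = mate, c
        else:
            edge_to_corner[key] = c
    return O
```

With V and O in hand, the standard traversal operators (next, previous, opposite, swing) are all constant-time index arithmetic and lookups.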
738

A Predictive Control Method for Human Upper-Limb Motion: Graph-Theoretic Modelling, Dynamic Optimization, and Experimental Investigations

Seth, Ajay January 2000 (has links)
Optimal control methods are applied to mechanical models in order to predict the control strategies in human arm movements. Optimality criteria are used to determine unique controls for a biomechanical model of the human upper-limb with redundant actuators. The motivation for this thesis is to provide a non-task-specific method of motion prediction as a tool for movement researchers and for controlling human models within virtual prototyping environments. The current strategy is based on determining the muscle activation levels (control signals) necessary to perform a task that optimizes several physical determinants of the model such as muscular and joint stresses, as well as performance timing. Currently, the initial and final location, orientation, and velocity of the hand define the desired task. Several models of the human arm were generated using a graph-theoretical method in order to take advantage of similar system topology through the evolution of arm models. Within this framework, muscles were modelled as non-linear actuator components acting between origin and insertion points on rigid body segments. Activation levels of the muscle actuators are considered the control inputs to the arm model. Optimization of the activation levels is performed via a hybrid genetic algorithm (GA) and a sequential quadratic programming (SQP) technique, which provides a globally optimal solution without sacrificing numerical precision, unlike traditional genetic algorithms. Advantages of the underlying genetic algorithm approach are that it does not require any prior knowledge of what might be a 'good' approximation in order for the method to converge, and it enables several objectives to be included in the evaluation of the fitness function. Results indicate that this approach can predict optimal strategies when compared to benchmark minimum-time maneuvers of a robot manipulator. 
The formulation and integration of the aforementioned components into a working model and the simulation of reaching and lifting tasks represents the bulk of the thesis. Results are compared to motion data collected in the laboratory from a test subject performing the same tasks. Discrepancies in the results are primarily due to model fidelity. However, more complex models are not evaluated due to the additional computational time required. The theoretical approach provides an excellent foundation, but further work is required to increase the computational efficiency of the numerical implementation before proceeding to more complex models.
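The hybrid global-then-local strategy described above can be sketched with a toy genetic phase followed by a finite-difference gradient polish standing in for SQP; population sizes, mutation scales, and step rules here are illustrative assumptions:

```python
import random

def hybrid_optimize(f, bounds, pop=40, gens=60, seed=0):
    """Toy hybrid optimizer: a genetic-style global search followed by a
    local gradient refinement. Stands in for the GA + SQP scheme; all
    hyperparameters are illustrative."""
    rng = random.Random(seed)
    lo, hi = zip(*bounds)
    # --- global phase: evolve a random population by elitism + mutation ---
    P = [[rng.uniform(l, h) for l, h in bounds] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=f)
        elite = P[: pop // 4]
        P = elite + [[min(max(x + rng.gauss(0, 0.2), l), h)
                      for x, l, h in zip(rng.choice(elite), lo, hi)]
                     for _ in range(pop - len(elite))]
    best = min(P, key=f)
    # --- local phase: finite-difference gradient descent polish ---
    h, step = 1e-6, 0.01
    for _ in range(200):
        g = [(f(best[:i] + [best[i] + h] + best[i + 1:]) - f(best)) / h
             for i in range(len(best))]
        best = [x - step * gi for x, gi in zip(best, g)]
    return best
```

The global phase supplies a starting point without any prior 'good' guess, and the local phase restores the numerical precision a pure GA lacks, mirroring the motivation given above.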
739

A Parameterized Algorithm for Upward Planarity Testing of Biconnected Graphs

Chan, Hubert January 2003 (has links)
We can visualize a graph by producing a geometric representation of the graph in which each node is represented by a single point on the plane, and each edge is represented by a curve that connects its two endpoints. Directed graphs are often used to model hierarchical structures; in order to visualize the hierarchy represented by such a graph, it is desirable that a drawing of the graph reflects this hierarchy. This can be achieved by drawing all the edges in the graph such that they all point in an upwards direction. A graph that has a drawing in which all edges point in an upwards direction and in which no edges cross is known as an upward planar graph. Unfortunately, testing if a graph is upward planar is NP-complete. Parameterized complexity is a technique used to find efficient algorithms for hard problems, and in particular, NP-complete problems. The main idea is that the complexity of an algorithm can be constrained, for the most part, to a parameter that describes some aspect of the problem. If the parameter is fixed, the algorithm will run in polynomial time. In this thesis, we investigate contracting an edge in an upward planar graph that has a specified embedding, and show that we can determine whether or not the resulting embedding is upward planar given the orientation of the clockwise and counterclockwise neighbours of the given edge. Using this result, we then show that under certain conditions, we can join two upward planar graphs at a vertex and obtain a new upward planar graph. These two results expand on work done by Hutton and Lubiw. Finally, we show that a biconnected graph has at most k!·8^(k-1) planar embeddings, where k is the number of triconnected components. By using an algorithm by Bertolazzi et al. that tests whether a given embedding is upward planar, we obtain a parameterized algorithm, where the parameter is the number of triconnected components, for testing the upward planarity of a biconnected graph. This algorithm runs in O(k!·8^k·n^3) time.
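Two of the quantities above are easy to state in code: the defining "all edges point upward" property of a drawing, and the k!·8^(k-1) embedding bound. A sketch (crossing-freeness, the other half of upward planarity, is not checked):

```python
from math import factorial

def is_upward_drawing(y, edges):
    """Check the defining property of an upward drawing: every directed
    edge (u, v) points strictly upward, i.e. y[v] > y[u], where y maps
    each node to its vertical coordinate in the drawing."""
    return all(y[v] > y[u] for u, v in edges)

def embedding_bound(k):
    """Upper bound from the thesis on the number of planar embeddings of
    a biconnected graph with k triconnected components: k! * 8^(k-1)."""
    return factorial(k) * 8 ** (k - 1)
```

The bound is what makes the algorithm parameterized: enumerating at most k!·8^(k-1) embeddings and testing each is polynomial once k is fixed.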
740

Multi-dimensional Interval Routing Schemes

Ganjali, Yashar January 2001 (has links)
Routing messages between pairs of nodes is one of the most fundamental tasks in any distributed computing system. An Interval Routing Scheme (IRS) is a well-known, space-efficient routing strategy for routing messages in a network. In this scheme, each node of the network is assigned an integer label and each link at each node is labeled with an interval. The interval assigned to a link l at a node v indicates the set of destination addresses of the messages which should be forwarded through l at v. When studying interval routing schemes, there are two main problems to be considered: a) Which classes of networks support a specific routing scheme? b) Assuming that a given network supports IRS, how good are the paths traversed by messages? The first problem is known as the characterization problem and has been studied for several types of IRS. In this thesis, we study the characterization problem for various schemes in which the labels assigned to the vertices are d-ary integer tuples (d-dimensional IRS) and the label assigned to each link of the network is a list of d 1-dimensional intervals. This is known as Multi-dimensional IRS (MIRS) and is an extension of the original IRS. We completely characterize the class of networks which support linear MIRS (which has no cyclic intervals) and strict MIRS (which has no intervals assigned to a link at a node v containing the label of v). In real networks, link costs often vary over time (dynamic cost links). We also give a complete characterization of the class of networks which support a certain type of MIRS that routes all messages on shortest paths in a network with dynamic cost links. The main criterion used to measure the quality of routing (the second problem) is the length of routing paths. In this thesis we also investigate this problem for MIRS and prove two lower bounds on the length of the longest routing path. These are the only known general results for MIRS. 
Finally, we study the relationship between various types of MIRS and the problem of drawing a hypergraph. Using some of our results we prove a tight bound on the number of dimensions of the space needed to draw a hypergraph.
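The forwarding rule of a 1-dimensional IRS can be sketched as follows; node labels and intervals here are illustrative, and cyclic intervals wrap as in non-linear IRS:

```python
class IRSNode:
    """One node of a 1-dimensional Interval Routing Scheme. Each outgoing
    link stores an interval of destination labels; a message is forwarded
    on the link whose interval contains its destination label."""

    def __init__(self, label, links):
        self.label = label
        self.links = links  # list of (neighbor_label, (start, end)) pairs

    @staticmethod
    def in_interval(dest, start, end):
        if start <= end:                        # linear interval [start, end]
            return start <= dest <= end
        return dest >= start or dest <= end     # cyclic (wrapping) interval

    def next_hop(self, dest):
        for nbr, (start, end) in self.links:
            if self.in_interval(dest, start, end):
                return nbr
        raise ValueError("no interval covers destination %d" % dest)
```

On a 4-node ring labeled 0-3, for instance, node 0 can route destinations 1-2 clockwise and destination 3 counterclockwise with just one interval per link, which is what makes the scheme space-efficient.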
