1 |
Resilience, Provisioning, and Control for the Network of the Future / Ausfallsicherheit, Dimensionierungsansätze und Kontrollmechanismen für das Netz der Zukunft
Martin, Rüdiger, January 2008 (has links) (PDF)
The Internet sees an ongoing transformation process from a single best-effort service network into a multi-service network. In addition to traditional applications like e-mail, WWW traffic, or file transfer, future generation networks (FGNs) will carry services with real-time constraints and stringent availability and reliability requirements like Voice over IP (VoIP), video conferencing, virtual private networks (VPNs) for finance, other real-time business applications, tele-medicine, or tele-robotics. Hence, quality of service (QoS) guarantees and resilience to failures are crucial characteristics of an FGN architecture. At the same time, network operations must be efficient. This necessitates sophisticated mechanisms for the provisioning and the control of future communication infrastructures. In this work we investigate such mechanisms for resilient FGNs. There are many aspects of the provisioning and control of resilient FGNs, such as traffic matrix estimation, traffic characterization, traffic forecasting, mechanisms for QoS enforcement also during failure cases, resilient routing, or scalability concerns for future routing and addressing mechanisms. In this work we focus on three important aspects for which performance analysis can deliver substantial insights: load balancing for multipath Internet routing, fast resilience concepts, and advanced dimensioning techniques for resilient networks. Routing in modern communication networks is often based on multipath structures, e.g., equal-cost multipath routing (ECMP) in IP networks, to facilitate traffic engineering and resilience. When multipath routing is applied, load balancing algorithms distribute the traffic over the available paths towards the destination according to pre-configured distribution values. State-of-the-art load balancing algorithms operate either on the packet or the flow level.
Packet level mechanisms achieve highly accurate traffic distributions, but are known to have negative effects on the performance of transport protocols and should not be applied. Flow level mechanisms avoid these performance degradations, but at the expense of reduced accuracy. The inaccuracies may have unpredictable effects on link capacity requirements and complicate resource management. Thus, it is important to understand exactly the accuracy and dynamics of load balancing algorithms in order to exercise better network control. Knowing their weaknesses, it is also important to look for alternatives and to assess their applicability in different networking scenarios. This is the first aspect of this work. Component failures are inevitable during the operation of communication networks and lead to routing disruptions if no special precautions are taken. In case of a failure, the robust shortest-path routing of the Internet reconverges after some time to a state where all nodes are again reachable – provided physical connectivity still exists. But the stringent availability and reliability criteria of new services make a fast reaction to failures obligatory for resilient FGNs. This led to the development of fast reroute (FRR) concepts for MPLS and IP routing. The operations of MPLS-FRR have already been standardized. Still, the standards leave some degrees of freedom for the resilient path layout, and it is important to understand the tradeoffs between the different options in order to efficiently provision resilient FGNs. In contrast, the standardization of IP-FRR is an ongoing process. The applicability and possible combinations of the different concepts are still open issues. IP-FRR also facilitates a comprehensive resilience framework for IP routing covering all steps of the failure recovery cycle. These points constitute another aspect of this work.
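The flow-level splitting described above can be pictured with a short sketch. This is an illustrative example, not one of the algorithms analyzed in the thesis: it hashes a flow's 5-tuple so that all packets of a flow stay on one path (avoiding the transport-protocol reordering that per-packet splitting causes), while only approximating the pre-configured distribution values.

```python
import hashlib

def select_path(flow_5tuple, weights):
    """Map a flow to one of the available paths.

    weights: pre-configured distribution values, e.g. [0.5, 0.3, 0.2].
    The flow's hash is reduced to a point in [0, 1) and matched against
    the cumulative weights, so the long-run split approximates `weights`.
    """
    digest = hashlib.sha256(repr(flow_5tuple).encode()).digest()
    point = int.from_bytes(digest[:8], "big") / 2**64  # uniform in [0, 1)
    cumulative = 0.0
    for path, w in enumerate(weights):
        cumulative += w
        if point < cumulative:
            return path
    return len(weights) - 1  # guard against floating-point rounding

# All packets of the same flow deterministically take the same path:
flow = ("10.0.0.1", "10.0.0.2", 5060, 5061, "udp")
assert select_path(flow, [0.5, 0.3, 0.2]) == select_path(flow, [0.5, 0.3, 0.2])
```

Because whole flows (not packets) are the unit of assignment, the realized split deviates from the target weights whenever flow sizes differ, which is exactly the inaccuracy discussed above.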
Finally, communication networks are usually over-provisioned, i.e., they have much more capacity installed than actually required during normal operation. This is a precaution against various challenges such as network element failures. An alternative to this capacity overprovisioning (CO) approach is admission control (AC). AC blocks new flows in case of imminent overload due to unanticipated events in order to protect the QoS of already admitted flows. On the one hand, CO is generally viewed as a simple mechanism, while AC is seen as a more complex mechanism that complicates the network control plane and raises interoperability issues. On the other hand, AC appears more cost-efficient than CO. To obtain advanced provisioning methods for resilient FGNs, it is important to find suitable models for irregular events, such as failures and different sources of overload, and to incorporate them into capacity dimensioning methods. This allows for a fair comparison between CO and AC in various situations and yields a better understanding of the strengths and weaknesses of both concepts. Such an advanced capacity dimensioning method for resilient FGNs represents the third aspect of this work.
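The cost difference between CO and AC can be made concrete with a classical dimensioning model. The following is an illustrative sketch under simplifying assumptions (Poisson flow arrivals, exponential holding times, a single bottleneck link), not the dimensioning method developed in the thesis: it uses the Erlang B formula to find the smallest capacity keeping blocking below a target, once for the normal load and once for a hypothetical failure-shifted load that pure CO would have to be dimensioned for.

```python
def erlang_b(offered_load, servers):
    """Blocking probability for `offered_load` (in Erlangs) on `servers`
    capacity units, computed with the standard recursive formulation."""
    b = 1.0
    for k in range(1, servers + 1):
        b = offered_load * b / (k + offered_load * b)
    return b

def capacity_for(offered_load, target_blocking):
    """Smallest number of capacity units keeping blocking below the target."""
    m = 1
    while erlang_b(offered_load, m) > target_blocking:
        m += 1
    return m

# Hypothetical numbers: 100 Erlangs normally, 150 after a reroute shifts
# traffic onto this link. CO must install the larger amount permanently,
# while AC can keep the smaller capacity and block the excess flows.
normal_capacity = capacity_for(100.0, 1e-3)
failure_capacity = capacity_for(150.0, 1e-3)
```

The gap between the two capacities is one way to quantify the apparent cost advantage of AC mentioned above, at the price of blocked flows during the irregular event.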
|
2 |
Reconstruction of Protein Backbone with the α-Carbon Coordinates
Wang, Jen-hui, 28 August 2007 (has links)
Given an amino acid sequence with the α-carbon 3D coordinates of its backbone, the all-atom protein backbone reconstruction problem (PBRP) is to rebuild the 3D coordinates of all atoms (N, C, and O atoms) on the backbone. In this thesis, we propose a method for solving PBRP based on homology modeling. First, we extract all consecutive 4-residue fragments from all protein structures in the PDB. Each fragment is identified by its second, third, and fourth residues; thus, the fragments are classified into 8000 residue groups. In each residue group, the fragments with similar structures are clustered together, and one typical fragment represents each cluster. These typical fragments form our fragment library. Then, we search the fragment library for suitable candidates to reconstruct the backbone of the target protein. To test the performance of our method, we use two testing sets of target proteins: one was proposed by Maupetit et al. [20] and the other is a subset extracted from CASP7. We compare the experimental results of our method with three previous works: MaxSprout, Adcock's method, and SABBAC, proposed by Maupetit et al. The reconstruction accuracy of our method is comparable to these previous works, and its solutions are more stable for most target proteins. The time efficiency of our method is also satisfactory.
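The library construction described above can be sketched roughly as follows. This is an illustrative sketch, not the thesis' implementation: the distance measure, the greedy clustering rule, and the threshold are placeholder assumptions (a real implementation would superimpose fragments before computing an RMSD), but the keying by the 2nd–4th residues, giving up to 20^3 = 8000 groups, follows the description above.

```python
from collections import defaultdict
import math

def frag_distance(a, b):
    """Crude structural distance between two 4-residue fragments, each a
    list of four (x, y, z) coordinates. No superposition is performed here;
    a real implementation would align the fragments first."""
    return math.sqrt(sum((p - q) ** 2
                         for pa, pb in zip(a, b)
                         for p, q in zip(pa, pb)) / len(a))

def build_library(fragments, threshold=1.0):
    """fragments: list of (4-letter sequence, coords) pairs. Each fragment
    is keyed by its 2nd-4th residues (up to 8000 residue groups); within a
    group, fragments closer than `threshold` join an existing cluster, and
    the first member of each cluster serves as its representative."""
    groups = defaultdict(list)
    for seq, coords in fragments:
        groups[seq[1:4]].append(coords)
    library = {}
    for key, members in groups.items():
        reps = []
        for coords in members:  # greedy clustering by distance to representatives
            if all(frag_distance(coords, r) > threshold for r in reps):
                reps.append(coords)
        library[key] = reps
    return library
```

Reconstruction would then look up the residue group of each window of the target sequence and try the group's representatives as backbone candidates.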
|
3 |
Refinement of All-atom Backbone Prediction of Proteins
Chang, Hsiao-Yen, 20 August 2008 (has links)
The all-atom protein backbone reconstruction problem (PBRP) is to rebuild the 3D coordinates of all atoms on the backbone, which include the N, C, and O atoms. Previous work showed that the predicted 3D positions of the O atoms are noticeably less accurate than those of the N and C atoms. Thus, our goal is to refine the positions of the O atoms after the initial prediction of the N, C, and O atoms on the backbone has been obtained by the previous method. Based on the AMBER force field, we simplify the energy function using statistical data on the bond lengths and bond angles of the 21 distinct amino acids (including one nonstandard amino acid). Then, we propose a two-phase refinement method (TPRM) that independently finds, for each O atom, the position that optimizes the modified energy function. We evaluate our method on two test sets of proteins. The experimental results show that the reconstruction accuracy of our method is better than that of previous methods, and its solutions are more stable than most previous work. Besides, our method runs much faster than the well-known prediction tool SABBAC.
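The refinement step can be pictured as a small optimization over each O atom's coordinates. The sketch below is illustrative and is not the thesis' TPRM: the energy terms, the ideal bond length and angle (typical textbook values, not the thesis' statistics), the force constants, and the plain numerical gradient descent are all stand-in assumptions for the simplified AMBER-based function described above.

```python
import math

IDEAL_CO_LENGTH = 1.23    # assumed C=O bond length in Angstroms
IDEAL_CACO_ANGLE = 2.11   # assumed Ca-C-O angle in radians (~121 degrees)

def energy(o, c, ca, kb=100.0, ka=10.0):
    """Simplified energy: penalize deviation of the C-O bond length and
    the Ca-C-O bond angle from assumed ideal values."""
    bond = math.dist(o, c)
    v1 = [a - b for a, b in zip(ca, c)]
    v2 = [a - b for a, b in zip(o, c)]
    cosang = sum(x * y for x, y in zip(v1, v2)) / (math.dist(ca, c) * bond)
    angle = math.acos(max(-1.0, min(1.0, cosang)))
    return kb * (bond - IDEAL_CO_LENGTH) ** 2 + ka * (angle - IDEAL_CACO_ANGLE) ** 2

def refine_oxygen(o, c, ca, steps=2000, h=1e-4, lr=1e-3):
    """Refine the O position by numerical gradient descent on the energy;
    each O atom can be treated independently, as in the text above."""
    o = list(o)
    for _ in range(steps):
        grad = []
        for i in range(3):
            op, om = o[:], o[:]
            op[i] += h
            om[i] -= h
            grad.append((energy(op, c, ca) - energy(om, c, ca)) / (2 * h))
        o = [x - lr * g for x, g in zip(o, grad)]
    return o
```

The point of the sketch is the shape of the problem, a smooth local optimization per atom, which is why such a refinement can be fast.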
|
4 |
Resilience, Provisioning, and Control for the Network of the Future
Martin, Rüdiger, January 2008 (has links)
Also published as: Würzburg, Univ., Diss., 2008
|
5 |
Ab initio protein tertiary structure prediction using restricted Ramachandran geometries and physico-chemical potentials
Gibbs, Nicholas, January 2001 (has links)
No description available.
|
6 |
Engineering Proteins via Peptide Backbone Mutagenesis: The Effects of Thioamide Linkages on the Folding and Stability of Short Peptides
Demick, Kristen Ann, January 2009 (has links)
Thesis advisor: Jianmin Gao / The development of proteins and peptides as therapeutic agents has emerged as a promising area of drug design. Due to increased antibiotic resistance, the search for novel antibiotics has become a primary area of interest within the pharmaceutical industry. Antimicrobial peptides are a particularly attractive model due to their innate cytolytic effects and favorable interaction with the membranes of bacterial cells within the host. Thioxylated analogues of biologically active peptides have shown increased enzymatic stability as well as increased selectivity and potency. Thioamide linkages were thus installed in a variety of short peptides, replacing the backbone amide linkage, in order to study the effects on peptide conformation and stability. Several bioanalytical tools were used in the analysis, including circular dichroism spectroscopy, NMR, size-exclusion high performance liquid chromatography, and fluorescence. The mutation was well accommodated within several systems, including Trpzip 4 and gramicidin A, and proved to confer comparable, and in several cases enhanced, stability in comparison to the wild-type peptides. / Thesis (MS) — Boston College, 2009. / Submitted to: Boston College. Graduate School of Arts and Sciences. / Discipline: Chemistry.
|
7 |
Improvement of Protein All-atom Prediction with SVM
Yen, Hsin-Wei, 07 September 2010 (has links)
Many studies have been devoted to solving the all-atom protein backbone reconstruction problem (PBRP), such as Adcock's method, MaxSprout, SABBAC, and Chang's method. In previous work, Wang et al. tried to solve this problem by homology modeling. Then, Chang et al. improved Wang's result by refining the positions of the oxygen atoms based on the AMBER force field. We compare the results of Chang et al. and SABBAC v1.2 on CASP7 and CASP8 and find that some proteins are predicted better by Chang's method and others by SABBAC. Based on SVM, we propose a tool preference classification method for determining which tool is potentially the better one for predicting the structure of a target protein. We design a series of steps to select the better feature sets for the SVM. Our method is tested on the proteins with standard amino acids in the CASP7 and CASP8 datasets, which contain 30 and 24 protein sequences, respectively. The experimental results show that our method achieves RMSD improvements of 7.39% and 2.94% over Chang's results in CASP7 and CASP8, respectively. Our method can also be applied to other effective prediction methods, including those developed in the future.
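The tool-preference idea can be sketched as follows. Note one substitution: the thesis builds the classifier with an SVM, while this dependency-free sketch uses a simple nearest-centroid classifier as a stand-in; the labeling of training proteins by per-protein RMSD comparison follows the idea described above, and the feature vectors are left abstract.

```python
def label(rmsd_chang, rmsd_sabbac):
    """Training label: which tool produced the lower backbone RMSD
    for this protein."""
    return "chang" if rmsd_chang < rmsd_sabbac else "sabbac"

class NearestCentroid:
    """Stand-in for the SVM of the thesis: classify a protein's feature
    vector by the nearest class centroid in feature space."""

    def fit(self, X, y):
        self.centroids = {}
        for cls in set(y):
            rows = [x for x, c in zip(X, y) if c == cls]
            self.centroids[cls] = [sum(col) / len(rows) for col in zip(*rows)]
        return self

    def predict(self, x):
        def dist2(centroid):
            return sum((a - b) ** 2 for a, b in zip(x, centroid))
        return min(self.centroids, key=lambda cls: dist2(self.centroids[cls]))
```

At prediction time, the classifier's output simply decides which of the two tools is run on the target protein, so the combined method can never be much worse than the better single tool on the training distribution.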
|
8 |
Backbone Ad Hoc Networks using Two-Tier Routing
Liao, Chun-kai, 11 January 2005 (has links)
In this thesis, a mobile network is combined with a backbone structure to form a hierarchical ad hoc network. A mobile ad hoc network is usually assumed to be homogeneous, with every mobile node having the same radio capability. However, a homogeneous ad hoc network suffers from poor scalability. We therefore establish a physical (not merely logical) hierarchical network to solve this problem, in which backbone nodes with larger radio power transmit over long distances at the high tier, and a cluster structure is used to efficiently utilize resources in a wide and dynamic network. We propose a cluster head determination scheme based on the degree variations of nodes. The nodes with minimum degree variation in their neighborhood are considered more stable and are selected as cluster heads. The cluster heads form the backbone nodes, and the other nodes are cluster members. The information about cluster members and the nodes in neighboring clusters is recorded in a table at each cluster head. According to this information, we know whether the destination node is close to the source node and can determine how to route the transmission. Routing is divided into low-tier and high-tier routing to relieve the workload of the backbone network. The simulation results demonstrate that the proposed hierarchical routing in two tiers (HRTT) alleviates several problems that occur in a flat network.
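The cluster head election based on degree variation might be sketched as follows. This is an illustrative sketch under assumptions the abstract does not spell out: degree variation is measured here as the total change in a node's neighbor set between consecutive samples, a node wins if no current neighbor has a strictly smaller variation, and ties are broken by node id.

```python
def degree_variation(history):
    """history: a node's neighbor sets sampled over time. The variation is
    the total symmetric difference between consecutive samples, so a node
    with a stable neighborhood scores low."""
    return sum(len(a ^ b) for a, b in zip(history, history[1:]))

def elect_cluster_heads(histories):
    """histories: {node_id: [neighbor_set_t0, neighbor_set_t1, ...]}.
    A node becomes a cluster head if it has the minimum degree variation
    among its current neighbors (ties broken by node id)."""
    var = {n: degree_variation(h) for n, h in histories.items()}
    heads = set()
    for node, h in histories.items():
        current_neighbors = h[-1]
        if all((var[node], node) <= (var[m], m)
               for m in current_neighbors if m in var):
            heads.add(node)
    return heads
```

Stability-based election like this tends to reduce how often cluster heads, and therefore the backbone, must be reorganized as nodes move.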
|
9 |
O problema da troca de mensagens de diferentes tamanhos em redes multi-aglomerados / The problem of the complete exchange of messages of different sizes between clusters interconnected by a backbone
Katayama, Fabio Massaaki, 27 October 2006 (links)
The growing popularity of clusters and computational grids has increased interest in the study of interprocessor communication. The communication time in a dedicated parallel computer or in a homogeneous local network is modeled in a similar way, regardless of which processors are communicating. In a network with heterogeneous links, nearby computers generally have lower latency and larger bandwidth than wide-area computers. Moreover, the aggregated bandwidth depends on the number of simultaneous connections between two wide-area clusters. In this work we study the complete exchange of messages of different sizes between clusters interconnected by a backbone. We propose a new communication algorithm based on known algorithms, present scheduling simulations of the studied algorithms on this multi-cluster network, and analyze the results of these simulations.
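A phase-based schedule for such a complete exchange can be sketched as follows. This is an illustrative greedy sketch, not the algorithm proposed in the thesis: it assumes each cluster's backbone gateway can send and receive at most one message per phase and that a phase lasts as long as its largest message.

```python
def schedule_exchange(messages):
    """messages: {(src_cluster, dst_cluster): size}. Largest-first greedy
    phase construction: within a phase, every cluster appears at most once
    as sender and at most once as receiver (the backbone gateway is the
    bottleneck), and the phase duration is its largest message's size."""
    remaining = sorted(messages.items(), key=lambda kv: -kv[1])
    phases, total_time = [], 0.0
    while remaining:
        busy_src, busy_dst, phase, leftovers = set(), set(), [], []
        for (s, d), size in remaining:
            if s not in busy_src and d not in busy_dst:
                busy_src.add(s)
                busy_dst.add(d)
                phase.append(((s, d), size))
            else:
                leftovers.append(((s, d), size))
        phases.append(phase)
        total_time += phase[0][1]  # first (largest) message dominates the phase
        remaining = leftovers
    return phases, total_time
```

The interesting effect the sketch exposes is exactly the one created by messages of different sizes: a phase is only as fast as its largest message, so pairing transfers of similar sizes shortens the schedule.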
|
10 |
All-atom Backbone Prediction with Improved Tool Preference Classification
Chen, Kai-Yu, 07 September 2011 (links)
The all-atom protein backbone reconstruction problem (PBRP) is to reconstruct the 3D coordinates of all atoms on the backbone, including the N, C, and O atoms, for a protein whose primary sequence and α-carbon coordinates are given. A variety of methods for solving PBRP have been proposed, such as Adcock's method, SABBAC, BBQ, and the methods of Chang and Yen. In recent work, Yen et al. found that the results of Chang's method are not always better than those of SABBAC, so they applied a tool preference classification to determine which tool is more suitable for predicting the structure of a given protein. In this thesis, we adopt BBQ (Backbone Building from Quadrilaterals) and Chang's method as our candidate prediction tools. In addition, the tool preferences of the different atoms (N, C, O) are determined separately. We call each preference classifier an atom classifier, which is built with a support vector machine (SVM). According to the preference classification of each atom classifier, the proper prediction tool, either BBQ or Chang's method, is used to construct that atom of the target protein. By combining the results for all atoms, the backbone structure of the protein is reconstructed. The datasets of our experiments are extracted from CASP7, CASP8, and CASP9, and consist of 30, 24, and 55 proteins, respectively. The proteins in these datasets contain only standard amino acids. We improve the average RMSDs of Yen's results from 0.4019 to 0.3682 in CASP7, from 0.4543 to 0.4202 in CASP8, and from 0.4155 to 0.3601 in CASP9.
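The per-atom combination step can be sketched as follows. This is an illustrative sketch of the combination logic only: the dictionary layout and tool names are assumptions, and the atom classifiers themselves (SVMs in the thesis) are taken as given, represented simply by their per-atom-type decisions.

```python
def combine_backbone(prediction_bbq, prediction_chang, atom_choice):
    """prediction_*: {(residue_index, atom_name): (x, y, z)} as produced by
    each candidate tool. atom_choice: {'N': tool, 'C': tool, 'O': tool},
    the decisions of the per-atom-type classifiers. The final backbone
    takes each atom's coordinates from the tool its classifier preferred."""
    tools = {"bbq": prediction_bbq, "chang": prediction_chang}
    return {key: tools[atom_choice[key[1]]][key] for key in prediction_bbq}
```

Separating the choice per atom type is what allows, say, the O atoms to come from one tool while the N and C atoms come from the other, rather than committing the whole protein to a single tool.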
|