1001 |
Simulating and prototyping software defined networking (SDN) using the Mininet approach to optimise host communication in a realistic programmable networking environment. Zulu, Lindinkosi Lethukuthula, 19 August 2019 (has links)
This is a Master’s student final dissertation. / In this project, two tests were performed. In the first test, Mininet-WiFi was used to simulate a
Software Defined Network to demonstrate Mininet-WiFi’s ability to serve as a Software
Defined Network emulator which can also be integrated into an existing network using a Network
Virtualized Function (NVF). A typical organization’s computer network was simulated, which
consisted of a website hosted on the LAMP (Linux, Apache, MySQL, PHP) virtual machine, and
an F5 application delivery controller (ADC) which provided load balancing of requests sent to the
web applications. A website page request was sent from the virtual stations inside Mininet-WiFi.
The request was received by the application delivery controller, which then used round robin
technique to send the request to one of the web servers on the LAMP virtual machine. The web
server then returned the requested website to the requesting virtual stations using the simulated
virtual network. The significance of these results is that they present Mininet-WiFi as an emulator
which can be integrated into a real programmable networking environment, offering a portable,
cost-effective and easily deployable test network which can be run on a single computer. These
results are also beneficial to modern network deployments, as live network devices can
communicate with the testing environment for data center, cloud and mobile providers.
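As a minimal illustration of the round-robin distribution performed by the application delivery controller in the first test (a sketch only; the actual load balancing was done by an F5 ADC, and the server addresses below are hypothetical):

```python
from itertools import cycle

# Hypothetical pool of web servers on the LAMP virtual machine.
web_servers = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]
next_server = cycle(web_servers)  # endless round-robin iterator

def dispatch(request_id: int) -> str:
    """Pick the next server in strict rotation, as a round-robin ADC would."""
    server = next(next_server)
    print(f"request {request_id} -> {server}")
    return server

for i in range(6):  # six page requests cycle through the pool twice
    dispatch(i)
```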
In the second test, a Software Defined Network was created in Mininet using a Python script. An
external interface was added to enable communication with the network outside of Mininet. The
Amazon Web Services Elastic Compute Cloud (EC2) was used to host an OpenDaylight controller,
which served as the control-plane device for the virtual switch within Mininet. To test
the network, a webserver hosted on the Emulated Virtual Environment – Next Generation (EVE-NG)
software was connected to Mininet. EVE-NG provides tools to model virtual devices and
interconnect them with other virtual or physical devices. The OpenDaylight controller was able to create the flows to facilitate
communication between the hosts in Mininet and the webserver in the real-life network / The University of South Africa
The University of Johannesburg / College of Engineering, Science and Technology
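A minimal sketch of how the second test's Mininet topology with a remote OpenDaylight controller and an external interface might be scripted (the controller address and interface name are placeholders, not values from the dissertation):

```python
#!/usr/bin/env python
from mininet.net import Mininet
from mininet.node import RemoteController, OVSSwitch
from mininet.link import Intf
from mininet.cli import CLI

def build():
    net = Mininet(switch=OVSSwitch, controller=None)
    # Remote OpenDaylight controller, e.g. hosted on an EC2 instance
    # (IP address is a placeholder).
    net.addController('c0', controller=RemoteController,
                      ip='203.0.113.10', port=6633)
    s1 = net.addSwitch('s1')
    h1 = net.addHost('h1')
    h2 = net.addHost('h2')
    net.addLink(h1, s1)
    net.addLink(h2, s1)
    # Attach a physical NIC so hosts can reach the network outside Mininet
    # (interface name is a placeholder).
    Intf('eth1', node=s1)
    net.start()
    CLI(net)
    net.stop()

if __name__ == '__main__':
    build()
```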
|
1002 |
A Study of Internet Database Selection Models / Internet Database Management and Systems Selection Study. Hsieh, Li-Fen, Unknown Date (has links)
In recent years, the emergence of the Internet and the breadth of its applications have changed the information architecture of enterprises. As platforms expand from client/server to Web-based applications, enterprises face a restructuring of their information architecture; intranets and electronic commerce are among today's newest applications. Because the World Wide Web adopts an open architecture with HTML and HTTP as standards, vendors have a common baseline and have developed many products to satisfy enterprise needs. With the widespread use of the Web, application architectures, interfaces and behavioural characteristics differ greatly from before, and component- and transaction-oriented Web applications have become the inevitable path.
Facing this change, one of the most important tasks for an enterprise is how to build a complete enterprise network environment to strengthen its competitiveness. Database systems are the foundation of an enterprise's information software, and under a Web-based application platform they remain a key component. Selecting a database system involves many factors: many database products exist, each with different characteristics and strengths, and each product can be further subdivided into components adopted as needed, so database selection becomes a complex process. A wrong choice wastes not only money; it can profoundly affect the smooth operation of the whole enterprise, erode its competitiveness, and in severe cases even threaten its survival. Database system selection therefore demands great care.
This thesis proposes an Internet database selection model to help enterprises, under the open architecture of the World Wide Web and multi-tier application environments, select an Internet database system that fits their own situation and requirements. The model comprises requirement analysis and confirmation, compilation of level-one to level-four Internet database requirement attributes, two-stage vendor screening, and vendor ranking with a weighting mechanism.
Finally, the model is applied to the Internet database equipment procurement project of the National Laboratories of Foods and Drugs, Department of Health, Executive Yuan. / Internet technology has drastically changed enterprise computing and platforms. Database management and systems are at the core of this change and key to the new revolution in information technology and infrastructure. Business information and models are stored and manipulated through database technology. Due to the fast-growing speed and variety of database products in the marketplace, managers have difficulty making the right decisions in selecting and maintaining Internet database management and systems.
To tackle this issue, we propose a requirements-based software selection model from the user’s viewpoint. In this research, we develop a five-step choice model with an emphasis on the requirement analysis and rank analysis. We collect and compile the functional and non-functional characteristics and features of the Internet database management and systems. We classify and organize them into a four-layer hierarchy and work with the weight mechanism in the rank analysis. This choice model adopts another five-part rank policy in order to produce the final suggestion of software selection. In the end, we apply the new model in a field case study of the Web Database Systems Procurement Project with the National Laboratories of Foods and Drugs, Department of Health, Executive Yuan.
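A sketch of the weighted rank analysis such a selection model might perform over its attribute hierarchy (the attributes, weights and vendor scores below are invented for illustration, not values from the thesis):

```python
# Hypothetical leaf attributes with weights summing to 1.0,
# and per-vendor scores on a 1-5 scale; all values are illustrative.
weights = {"performance": 0.30, "scalability": 0.25,
           "security": 0.25, "cost": 0.20}

vendor_scores = {
    "Vendor A": {"performance": 4, "scalability": 5, "security": 3, "cost": 2},
    "Vendor B": {"performance": 3, "scalability": 3, "security": 4, "cost": 5},
}

def weighted_score(scores: dict) -> float:
    """Weighted sum over the attribute hierarchy's leaf attributes."""
    return sum(weights[a] * s for a, s in scores.items())

ranking = sorted(vendor_scores,
                 key=lambda v: weighted_score(vendor_scores[v]),
                 reverse=True)
for v in ranking:
    print(f"{v}: {weighted_score(vendor_scores[v]):.2f}")
```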
|
1003 |
Rapportsystem för Active Directory-information / Report System for Active Directory Information. Sjödahl, Fredrik, January 2010 (has links)
When it comes to invoicing a company's services, manual handling has proved to be time-consuming and error-prone, so many invoicing systems have been developed for different computer systems. This diploma work develops a prototype of a fully automatic report system based on selected user information in Active Directory; this information will later be used as the basis for invoicing. The information is compiled in a database from which the user can easily retrieve a summary of each customer's usage of various services for a specific domain.
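A minimal sketch of how such user information might be pulled from Active Directory with the ldap3 library (the server address, credentials, search base and attributes are placeholders; the thesis's actual implementation may differ):

```python
from ldap3 import Server, Connection, ALL

# Placeholder connection details -- not from the thesis.
server = Server("dc01.example.local", get_info=ALL)
conn = Connection(server, user="EXAMPLE\\reporter",
                  password="secret", auto_bind=True)

# Fetch account names and group memberships for users in one domain.
conn.search("dc=example,dc=local",
            "(&(objectClass=user)(objectCategory=person))",
            attributes=["sAMAccountName", "memberOf"])

for entry in conn.entries:
    # Each entry could be written to the report database here.
    print(entry.sAMAccountName, list(entry.memberOf))
```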
|
1004 |
Algorithmic Approaches For Protein-Protein Docking And Quaternary Structure Inference. Mitra, Pralay, 07 1900 (has links)
Molecular interaction among proteins drives cellular processes through the formation of complexes that perform the requisite biochemical function. While some complexes are obligate (i.e., they fold together during complexation), others are non-obligate and are formed through macromolecular recognition. Macromolecular recognition in proteins is highly specific, yet it can be either permanent or non-permanent in nature. Hallmarks of permanent recognition complexes include a large surface of interaction and stabilization by hydrophobic interactions and other noncovalent forces. The amino acids that contribute critically to the free energy of binding at these interfaces are called “hot spot” residues. Non-permanent recognition complexes, on the other hand, usually show a small interface of interaction, with limited stabilization from noncovalent forces. For both permanent and non-permanent complexes, the specificity of molecular interaction is governed by the geometric compatibility of the interaction surfaces and the noncovalent forces that anchor them. A great deal of work has already been done toward understanding the basis of protein macromolecular recognition.1; 2 Based on these studies, efforts have been made to develop protein-protein docking algorithms that can predict the geometric orientation of the interacting molecules from their individual unbound states. Despite advances in docking methodologies, several significant difficulties remain.1 Therefore, in this thesis, we start with a literature review to understand the individual merits and demerits of the existing approaches (Chapter 1),3 and then attempt to address some of the problems by developing methods to infer protein quaternary structure from the crystalline state, and to improve structural and chemical understanding of protein-protein interactions through biological complex prediction.
The understanding of the interaction geometry is the first step in a protein-protein interaction study. Yet no consistent method exists to assess the geometric compatibility of the interacting interface, because of its highly rugged nature. This suggested that new, sensitive measures and methods were needed to tackle the problem. We therefore developed two new and conceptually different measures, using Delaunay tessellation and interface slice selection, to compute the surface complementarity and atom packing at the protein-protein interface (Chapter 2).4 We called these Normalized Surface Complementarity (NSc) and Normalized Interface Packing (NIP). We rigorously benchmarked the measures on the non-redundant protein complexes available in the Protein Data Bank (PDB) and found that they efficiently segregate biological protein-protein contacts from non-biological ones, especially those derived from X-ray crystallography. Sensitive surface packing/complementarity recognition algorithms are usually computationally expensive and thus limited in application to high-throughput screening; special emphasis was therefore given to making our measures compute-efficient as well. Our final evaluation showed that NSc and NIP correlate strongly with each other, and with the interface-area-normalized values available from the Surface Complementarity program (CCP4 Suite: <http://smb.slac.stanford.edu/facilities/software/ccp4/html/sc.html>), but at a fraction of the computing cost.
After building the geometry-based surface complementarity and packing assessment methods to assess the rugged protein surface, we advanced toward our goal of determining the stabilities of the geometrically compatible interfaces formed. To do so, we needed to survey the quaternary structures of proteins with various affinities. The emphasis on affinity arose from its strong relationship with the permanent or non-permanent lifetime of the complex. We therefore set up data mining studies on two databases, PQS (Protein Quaternary structure database: http://pqs.ebi.ac.uk) and PISA (Protein Interfaces, Surfaces and Assemblies: www.ebi.ac.uk/pdbe/prot_int/pistart.html), which offer quaternary structure data on protein complexes derived from X-ray crystallography. To our surprise, we found that these databases provided valid quaternary structures mostly for moderate- to strong-affinity complexes. The limitation could be ascertained by browsing annotations from another curated database of protein quaternary structure (PiQSi:5 supfam.mrc-lmb.cam.ac.uk/elevy/piqsi/piqsi_home.cgi) and by literature surveys. This made it necessary first to develop a more robust method to infer the quaternary structures, of all affinities, available from the PDB. We therefore developed a new scheme focused on covering complexes of all affinity categories, especially the weak/very weak ones, and heteromeric quaternary structures (Chapter 3).6 Our scheme combines a naïve Bayes classifier and point-group symmetry under a Boolean framework to detect all categories of protein quaternary structure in the crystal lattice. We tested it on a standard benchmark of 112 recognition heteromeric complexes and obtained correct recall in 95% of cases, significantly better than the 53% achieved by PISA,7 a state-of-the-art quaternary structure detection method hosted at the European Bioinformatics Institute, Hinxton, UK. The few cases that failed correct detection through our scheme offered interesting insights into the intriguing nature of protein contacts in the lattice. The findings have implications for the accurate inference of the quaternary states of proteins, especially weak-affinity complexes, where biological protein contacts tend to be sacrificed for the energetically optimal ones that favor the formation/stabilization of the crystal lattice. We expect our method to be widely used by researchers interested in protein quaternary structure and interaction.
Having developed a method that allows us to sample all categories of quaternary structures in the PDB, we set our goal on the next problem: accurately determining the stabilities of the geometrically compatible protein surfaces involved in interaction. Reformulating the question in terms of protein-protein docking, we asked how we could reliably infer the stability of any arbitrary interface that is formed when two protein molecules are brought sterically closer. In a real protein docking exercise this question is asked innumerable times during energy-based screening of the thousands of decoys geometrically sampled (through rotation and translation) from the unbound subunits. Current docking methods face problems on two counts: (i) the number of decoy interfaces for which energies must be evaluated is rather large (64,320 for 9° rotation and translation sampling of a dimeric complex), and (ii) energy-based screening is not efficient enough, so decoys with native-like quaternary structure are rarely ranked highly. We addressed both problems, with interesting results.
Intricate decoy-filtering approaches have been developed, applied during the search stage, the sampling stage, or both. For filtering, statistical information, such as 3D conservation of the interfacial residues, is typically used; more expensive approaches screen for orientation, shape complementarity and electrostatics. We developed an interface-area-based decoy filter for the sampling stage, exploiting the assumption that native-like decoys must have the largest, or close to the largest, interface (Chapter 4).8 Implementation of this assumption and standard benchmarking showed that in 91% of cases we could recover native-like decoys of bound and unbound binary docking targets of both strong and weak affinity. This allowed us to propose that “native-like decoys must have the largest, or close to the largest, interface” can be used as a rule to exclude non-native decoys efficiently during docking sampling, as sketched below. This rule dramatically reduces the needle-in-a-haystack problem faced in a docking study by eliminating >95% of the decoy set produced by the sampling search. We incorporated the rule as a central part of our protein docking strategy.
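A sketch of the interface-area filter rule (the tolerance below is an assumed parameter; the abstract does not fix a specific cutoff):

```python
def filter_decoys(decoys: dict, area_fraction: float = 0.90) -> dict:
    """Keep only decoys whose interface area is the largest or close to it.

    `decoys` maps a decoy id to its interface area (e.g. buried surface
    area in square angstroms); `area_fraction` is an assumed tolerance.
    """
    max_area = max(decoys.values())
    return {d: a for d, a in decoys.items() if a >= area_fraction * max_area}

# Illustrative areas for four hypothetical decoys:
decoys = {"decoy_001": 1450.0, "decoy_002": 820.5,
          "decoy_003": 1390.2, "decoy_004": 610.0}
print(filter_decoys(decoys))  # decoy_001 and decoy_003 survive
```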
While addressing the question of energy-based screening to place native-like decoys at high rank during docking, we came across a large volume of published work. The mainstay of most energy-based screening that avoids statistical potentials is some form of the Coulomb potential, the Lennard-Jones potential and solvation energy. Different flavors of these energy functions are used, with diverse preferences and weights for the individual terms. Interestingly, in all cases the energy functions were of unnormalized form: individual energy terms were simply added to arrive at a final score used for ranking. Proteins, being large molecules, offer limited scope for applying semi-empirical or quantum mechanical methods to large-scale evaluation of energy. We therefore developed a de novo empirical scoring function in normalized form. As already stated, we found NSc and NIP to be highly discriminatory in segregating biological from non-biological interfaces, so we incorporated them as parameters of our scoring function. Our data mining study revealed a reasonable correlation of -0.73 between normalized solvation energy and normalized nonbonding energy (Coulomb + van der Waals) at the interface. Using this information, we extended our scoring function by combining the geometric measures with the normalized interaction energies. Tests on 30 unbound binary protein-protein complexes showed that in 16 cases we could identify at least one decoy in the top three ranks with ≤10 Å backbone root-mean-square deviation (RMSD) from the true binding geometry. The scoring results were compared with other state-of-the-art methods, which returned inferior results. The salient feature of our scoring function is the exclusion of any experiment-guided restraints, evolutionary information, statistical propensities or modified interaction-energy equations commonly used by others. Tests on 118 less difficult bound binary protein-protein complexes with ≤35% sequence redundancy at the interface gave first rank in 77% of cases, where the native-like decoy was chosen from among 10,000 and had ≤5 Å backbone RMSD from the true geometry. The scoring function, results and comparison with other methods are discussed extensively in Chapter 5.9 The method has been implemented and made available for public use as a web server, PROBE (http://pallab.serc.iisc.ernet.in/probe). The development and use of PROBE are elaborated in Chapter 7.10
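A hedged sketch of what a normalized, weighted combination of such geometric and energetic terms could look like (the weights and the exact functional form are invented for illustration; the thesis's actual scoring function is defined in the published work):

```python
def score_decoy(nsc: float, nip: float,
                e_nonbond_norm: float, e_solv_norm: float,
                w=(0.3, 0.2, 0.3, 0.2)) -> float:
    """Toy normalized score combining geometric and energetic terms.

    nsc, nip       : normalized surface complementarity / packing (0-1)
    e_nonbond_norm : normalized Coulomb + van der Waals term (0-1; lower
                     raw energy mapped to a higher normalized value)
    e_solv_norm    : normalized solvation term (0-1)
    w              : assumed weights, NOT the thesis's actual values.
    """
    return (w[0] * nsc + w[1] * nip
            + w[2] * e_nonbond_norm + w[3] * e_solv_norm)

# Rank two hypothetical decoys; a higher score reads as more native-like.
print(score_decoy(0.72, 0.65, 0.80, 0.55))
print(score_decoy(0.41, 0.38, 0.60, 0.70))
```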
In the course of this work, we generated a huge amount of data that could be useful to others, especially “protein dockers”. We therefore developed dockYard (http://pallab.serc.iisc.ernet.in/dockYard), a repository of protein-protein docking decoys (Chapter 6).11 dockYard offers four categories of docking decoys derived from: Bound (native dimer co-crystallized), Unbound (individual subunits as well as the target are crystallized), Variants (match the previous two categories in at least one subunit with 100% sequence identity), and Interlogs (match the previous categories in at least one subunit with ≥90% or ≥50% sequence identity). There is a facility for full or selective download based on search parameters. The portal also serves as a repository for modelers who may want to share their decoy sets with the community.
In conclusion, although we have made several contributions to the development of algorithms for improved protein-protein docking and quaternary structure inference, many challenges remain (Chapter 8). The principal challenge arises from treating proteins as flexible bodies whose conformational states may change on quaternary structure formation. In addition, solvent plays a major role in the free energy of binding, but its exact contribution is not straightforward to estimate. Undoubtedly, the cost of computation is one of the limiting factors, apart from good energy functions for evaluating docking decoys. The next generation of algorithms must therefore focus on improved docking studies that realistically incorporate flexibility and the solvent environment in all their evaluations.
|
1005 |
License Management for EBITool. Krznaric, Anton, January 2013 (has links)
This degree project deals with license management for EBITool. It is about providing protection and monitoring for a Java application via a license server, and about the construction of that server. An analysis that discusses the approach and other possible courses of action is also included, together with a discussion of a prototype implementation of the model solution from the analysis. The prototype is a Java EE application that deploys to JBoss AS7. It was developed using JBoss Developer Studio 5.0.0, an Eclipse IDE with JBoss Tools preinstalled. It exposes web services to Java applications through SOAP via JAX-WS. Using Hibernate, the web-service Enterprise JavaBeans access a PostgreSQL 9.1 database via entity classes mapped to the database through the Java Persistence API.
|
1006 |
A knowledgebase of stress responsive gene regulatory elements in Arabidopsis thaliana. Adam, Muhammed Saleem, January 2011 (has links)
Stress responsive genes play a key role in shaping the manner in which plants process and respond to environmental stress. Their gene products are linked to DNA transcription and its consequent translation into a response product. However, whilst these genes play a significant role in manufacturing responses to stressful stimuli, transcription factors coordinate access to these genes, specifically by accessing a gene’s promoter region, which houses transcription factor binding sites. Here transcriptional elements play a key role in mediating responses to environmental stress, where each transcription factor binding site may constitute a potential response to a stress signal. Arabidopsis thaliana, a model organism, can be used to identify the mechanism by which transcription factors shape a plant’s survival in a stressful environment. Whilst there are numerous plant stress research groups, globally there is a shortage of publicly available stress responsive gene databases. In addition, a number of previous databases such as the Generation Challenge Programme’s comparative plant stress-responsive gene catalogue, Stresslink and DRASTIC have become defunct, whilst others have stagnated. There is currently a single Arabidopsis thaliana stress response database, STIFDB, which was launched in 2008 and only covers abiotic stresses as handled by the major abiotic stress responsive transcription factor families. Its data was sourced from microarray expression databases; it contains numerous omissions as well as numerous erroneous entries, and it has not been updated since its inception. The Dragon Arabidopsis Stress Transcription Factor database (DASTF) was developed in response to this lack of stress response gene resources. A total of 2333 entries were downloaded from SWISSPROT, manually curated and imported into DASTF. The entries represent 424 transcription factor families, and each entry has a corresponding SWISSPROT, ENTREZ GENBANK and TAIR accession number. The 5′ untranslated regions (UTRs) of 417 families were scanned against TRANSFAC’s binding site catalogue to identify binding sites. The relational database consists of two tables, namely a transcription factor table and a transcription factor family table, called DASTF_TF and TF_Family respectively. Using a two-tier client-server architecture, a webserver was built with PHP, APACHE and MYSQL, and the data was loaded into these tables with a PYTHON script, as sketched below. The DASTF database contains 60 entries which correspond to biotic stress and 167 which correspond to abiotic stress, while 2106 respond to biotic and/or abiotic stress. Users can search the database using text, family, chromosome and stress-type search options. Online tools such as HMMER, CLUSTALW, BLAST and HYDROCALCULATOR have been integrated into the DASTF database. Users can upload sequences to identify which transcription factor family their sequences belong to by using HMMER. The website can be accessed at http://apps.sanbi.ac.za/dastf/ and two updates per year are envisaged.
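A sketch of how the two-table schema might be created and loaded with a Python script (only the two table names come from the abstract; the column names, credentials and sample row are assumptions):

```python
import mysql.connector  # pip install mysql-connector-python

conn = mysql.connector.connect(user="dastf", password="secret",
                               database="dastf")  # placeholder credentials
cur = conn.cursor()

# Two-table relational schema; columns other than the table names
# are illustrative guesses based on the abstract.
cur.execute("""
    CREATE TABLE IF NOT EXISTS TF_Family (
        family_id INT PRIMARY KEY AUTO_INCREMENT,
        family_name VARCHAR(100) NOT NULL
    )""")
cur.execute("""
    CREATE TABLE IF NOT EXISTS DASTF_TF (
        tf_id INT PRIMARY KEY AUTO_INCREMENT,
        swissprot_acc VARCHAR(20),
        genbank_acc VARCHAR(20),
        tair_acc VARCHAR(20),
        stress_type ENUM('biotic', 'abiotic', 'both'),
        family_id INT,
        FOREIGN KEY (family_id) REFERENCES TF_Family(family_id)
    )""")

cur.execute("INSERT INTO TF_Family (family_name) VALUES (%s)", ("bZIP",))
conn.commit()
conn.close()
```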
|
1007 |
Planificación, análisis y optimización de sistemas distribuidos de tiempo real estricto / Planning, Analysis and Optimization of Hard Real-Time Distributed Systems. Gutiérrez García, José Javier, 27 October 1995 (has links)
The thesis presents a methodology for the analysis and design of hard real-time distributed systems, and its application to a practical implementation in the Ada language. Existing methods for scheduling and analyzing distributed real-time systems have been optimized through a new heuristic algorithm for assigning priorities, and with the application of the sporadic server algorithm to the scheduling of real-time communication networks. The area of application of the analysis has been extended to more complex systems, such as those with synchronization through event exchange or message passing. It has been demonstrated that the proposed methodology can be implemented in practical real-time systems, through its application to distributed systems programmed in the Ada language.
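As background for the kind of schedulability analysis the thesis optimizes, a sketch of the classical fixed-point response-time recurrence for fixed-priority tasks on a single node (this is the standard textbook analysis, not the thesis's extended holistic method; the task parameters are invented):

```python
import math

def response_time(C, T, i, deadline=None):
    """Classical response-time analysis for task i under fixed priorities.

    C, T : worst-case execution times and periods, sorted so that
           index 0 has the highest priority.
    Iterates R = C_i + sum_j ceil(R / T_j) * C_j over higher-priority
    tasks j until a fixed point (or the deadline is exceeded).
    """
    R = C[i]
    while True:
        R_next = C[i] + sum(math.ceil(R / T[j]) * C[j] for j in range(i))
        if R_next == R:
            return R  # converged: worst-case response time
        if deadline is not None and R_next > deadline:
            return None  # unschedulable
        R = R_next

# Three illustrative tasks (highest priority first).
C = [1.0, 2.0, 3.0]
T = [5.0, 10.0, 20.0]
for i in range(3):
    print(f"task {i}: R = {response_time(C, T, i)}")
```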
|
1008 |
Algorithms For Efficient Implementation Of Secure Group Communication Systems. Rahul, S, 11 1900 (has links)
A distributed application may be considered a set of nodes, spread across the network, which need to communicate with each other. The design and implementation of such distributed applications are greatly simplified by Group Communication Systems (GCSs), which provide multipoint-to-multipoint communication; GCSs can therefore be used as building blocks for implementing distributed applications. The GCS is responsible for reliable delivery of group messages and for management of group membership. The peer-to-peer model and the client-server model are the two models of distributed systems used for implementing GCSs. In this thesis, our focus is on improving the capability of GCSs based on the client-server model.
Security is an important requirement of many distributed applications, and for such applications security has to be provided in the GCS itself. The security of a GCS includes confidentiality, authentication and non-repudiation of messages, and assurance that the GCS is properly meeting its guarantees. The complexity and cost of implementing these three types of security guarantees depend greatly on whether or not the GCS servers are trusted by the group members. Making use of GCS services provided by untrusted servers becomes necessary when the servers are managed by a third party. In this thesis, we have proposed algorithms for ensuring the above three security guarantees for GCSs in which the servers are not trusted. As part of the solution, we have proposed a new digital multisignature scheme which allows group members to verify that a message has indeed been signed by all group members.
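For contrast with the multisignature scheme proposed in the thesis, a naive baseline in which every member signs the message and every signature is verified individually (using Ed25519 from the `cryptography` package; a true multisignature compresses these n per-member signatures into a single verifiable object, which is what the thesis's scheme provides):

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
)
from cryptography.exceptions import InvalidSignature

# Naive baseline: each group member signs the message independently.
message = b"group broadcast: commit view 42"
members = [Ed25519PrivateKey.generate() for _ in range(5)]
signatures = [m.sign(message) for m in members]

def all_members_signed(pubkeys, sigs, msg) -> bool:
    """Verify one signature per member; verification cost grows linearly
    with group size, which is the overhead a multisignature avoids."""
    try:
        for pk, sig in zip(pubkeys, sigs):
            pk.verify(sig, msg)
        return True
    except InvalidSignature:
        return False

pubkeys = [m.public_key() for m in members]
print(all_members_signed(pubkeys, signatures, message))  # True
```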
The various group key management algorithms proposed in the literature differ from each other with respect to four metrics: communication overhead, computational overhead, storage at each member, and distribution of load among group members. We identify the need for a distributed group key management algorithm that minimizes the computational overhead on group members, and we propose an algorithm to achieve it.
|
1009 |
Caching Techniques For Dynamic Web Servers. Suresha, *, 07 1900 (links)
Websites are shifting from a static model to a dynamic model in order to deliver dynamic, interactive, and personalized experiences to their users. However, dynamic content generation comes at a cost: each request requires computation as well as communication across multiple components within the website and across the Internet. In fact, dynamic pages are constructed on the fly, on demand. Due to their construction overheads and non-cacheability, dynamic pages result in substantially increased user response times, server load, and bandwidth consumption, as compared to static pages. With the exponential growth of Internet traffic and with websites becoming increasingly complex, performance and scalability have become major bottlenecks for dynamic websites.
A variety of strategies have been proposed to address these issues. Many of these solutions perform well in their individual contexts, but have not been analyzed in an integrated fashion. In our work, we have carried out a study of combining a carefully chosen set of these approaches and analyzed their behavior. Specifically, we consider solutions based on the recently-proposed fragment caching technique, since it ensures both correctness and freshness of page contents. We have developed mechanisms for reducing bandwidth consumption and dynamic page construction overheads by integrating fragment caching with various techniques such as proxy-based caching of dynamic contents, pre-generating pages, and caching program code.
We start by presenting a dynamic proxy caching technique that combines the benefits of both proxy-based and server-side caching approaches, without suffering from their individual limitations. This technique concentrates on reducing the bandwidth consumed by dynamic web pages. We then move on to mechanisms for reducing dynamic page construction times: during normal loading, this is done through a hybrid technique of fragment caching and page pre-generation, utilizing the excess capacity with which web servers are typically provisioned to handle peak loads; during peak loading, it is achieved by integrating fragment caching and code caching, optionally augmented with page pre-generation. A sketch of the underlying fragment-caching idea follows.
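A minimal sketch of fragment caching, the core technique these mechanisms build on: cacheable fragments are stored and reused across requests while per-user fragments are regenerated every time (fragment names and TTLs are invented for illustration):

```python
import time

_cache: dict[str, tuple[float, str]] = {}  # fragment id -> (expiry, html)

def get_fragment(frag_id: str, ttl: float, render) -> str:
    """Return a cached fragment if still fresh, else re-render and cache it."""
    now = time.time()
    hit = _cache.get(frag_id)
    if hit and hit[0] > now:
        return hit[1]
    html = render()
    _cache[frag_id] = (now + ttl, html)
    return html

def build_page(user: str) -> str:
    # Shared fragments are cached; the personalized greeting never is.
    header = get_fragment("header", ttl=300, render=lambda: "<h1>Shop</h1>")
    listing = get_fragment("listing", ttl=30,
                           render=lambda: "<ul><li>item</li></ul>")
    greeting = f"<p>Hello, {user}</p>"  # per-user, rebuilt on every request
    return header + greeting + listing

print(build_page("alice"))
```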
In summary, we present a variety of methods for integrating existing solutions for serving dynamic web pages, with the goal of reducing bandwidth consumption from the web-infrastructure perspective and page construction times from the user's perspective.
|