121

Next generation approaches toward engineering therapeutic proteases

Pogson, Mark Wilson 13 November 2013
Engineering protease substrate specificity and selectivity has the potential to yield entirely new possibilities in the analytical, biotechnological, and therapeutic domains. For example, therapeutic applications can be envisioned in which engineered proteases could replace antibodies by irreversibly inactivating a large excess of disease-associated target proteins in a catalytic fashion. Technological advances in molecular biology have made laboratory-based evolution techniques for protein engineering readily accessible. However, the ability to interrogate the activities and substrate preferences of large numbers of protease variants is predicated on the availability of quantitative high-throughput assays that maintain the essential link between genotype and phenotype. In this work we have investigated a variety of novel single-cell fluorescence assays and selections for engineering protease substrate specificity and selectivity, and demonstrated the utility of some of these systems for the engineering of novel enzymes. The second chapter of this dissertation reports the isolation of a highly active ([chemical formula]) variant of the Escherichia coli endopeptidase OmpT that selectively hydrolyzes peptides after 3-nitrotyrosine while effectively discriminating against similar peptides containing unmodified tyrosine, sulfotyrosine, phosphotyrosine, and phosphoserine. The isolation of protease variants that can discriminate between substrates based on the posttranslational modification of Tyr was made possible by implementing a multi-color flow cytometric assay using multiple simultaneous counter-selection substrates for the screening of large mutant libraries. While primary sequence recognition may suffice for proteomic applications, many therapeutic applications of engineered proteases will require the cleavage of folded protein targets. Unfortunately, we have found that engineered proteases that can cleave peptides very efficiently are often unable to digest the same sequences inserted into the loop regions of a folded protein. The logical conclusion, then, is that an entire target protein, or at least a protein domain, rather than peptide segments, must be incorporated into protease engineering screening assays. As a critical first step toward the development of next-generation, single-cell screening systems for therapeutic protease engineering, we have developed novel assays that exploit cell-surface capture of exogenous protein substrates. One assay (Chapter 3) relies on an autoinhibited protein fusion that capitalizes on the p53 antagonist MDM2 as a detector of protease activity in addition to its utility as a counter-selection substrate. Using this system we successfully isolated OmpT variants that selectively cleave a designed site within our autoinhibited substrate. A second high-throughput screen (Chapter 4) monitors native protein cleavage. Target proteins are captured at the cell surface using a polycationic tail; the screen incorporates counter-selection, and the proteolytic state of the substrate is monitored using epitope tags fused to the N- and C-termini together with fluorescently labeled anti-epitope-tag antibodies.
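A toy illustration of this kind of multi-color counter-selection gate, expressed computationally; the lognormal signal model, channel assignments, and threshold percentiles are invented for the sketch and are not the dissertation's actual gating scheme:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # library-scale pool of cells

# Simulated per-cell fluorescence, one channel per substrate:
# channel 0 reports cleavage of the 3-nitrotyrosine selection substrate;
# channels 1-2 report cleavage of counter-selection substrates
# (e.g. unmodified Tyr, sulfotyrosine), each carrying a distinct dye.
signals = rng.lognormal(mean=2.0, sigma=1.0, size=(n, 3))

# Sort gate: strong signal on the selection substrate AND near-background
# signal on every counter-selection substrate simultaneously.
hi_target = signals[:, 0] > np.percentile(signals[:, 0], 95)
lo_counter = np.all(signals[:, 1:] < np.percentile(signals[:, 1:], 50, axis=0),
                    axis=1)
keep = hi_target & lo_counter
print(f"collected {keep.sum()} of {n} cells")
```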
122

Performance-efficient mechanisms for managing irregularity in throughput processors

Rhu, Minsoo 01 July 2014
Recent graphics processing units (GPUs) have emerged as a promising platform for general purpose computing and have been shown to be very efficient in executing parallel applications with regular control and memory access behavior. Current GPU architectures primarily adopt the single-instruction multiple-thread (SIMT) programming model, which balances programmability and hardware efficiency. With SIMT, the programmer writes application code to be executed by scalar threads, and each thread is supported with conditional branches and fine-grained load/store instructions for ease of programming. At the same time, the hardware and software collaboratively enable the grouping of scalar threads to be executed in a vectorized single-instruction multiple-data (SIMD) in-order pipeline, simplifying hardware design. As GPUs gain momentum in various application domains, these throughput processors will increasingly demand more efficient execution of irregular applications. Current GPUs, however, suffer from reduced thread-level parallelism, underutilization of compute resources, inefficient on-chip caching, and waste in off-chip memory bandwidth utilization for highly irregular programs with divergent control and memory accesses. In this dissertation, I develop techniques that enable simple, robust, and highly effective performance optimizations for SIMT-based throughput processor architectures so that they can better manage irregularity. I first identify that previously suggested optimizations to the divergent control flow problem suffer from the following limitations: 1) serialized execution of diverging paths, 2) lack of robustness across regular/irregular codes, and 3) limited applicability. Based on these observations, I propose and evaluate three novel mechanisms that resolve the aforementioned issues, providing significant performance improvements while minimizing implementation overhead. In the second half of the dissertation, I observe that conventional coarse-grained memory hierarchy designs do not take into account the massively multi-threaded nature of GPUs, which leads to substantial waste in off-chip memory bandwidth utilization. I design and evaluate a locality-aware memory hierarchy for throughput processors, which retains the advantages of coarse-grained accesses for spatially and temporally local programs while permitting selective fine-grained access to memory. By adaptively adjusting the access granularity, memory bandwidth and energy consumption are reduced for data with low spatial/temporal locality without adding control overhead or losing prefetching potential for data with high spatial locality.
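As a concrete illustration of the divergence problem the first half of the dissertation addresses, here is a toy model of the textbook baseline behavior, not the proposed mechanisms; the warp width and path costs are made-up:

```python
import numpy as np

def warp_cycles(predicate, if_cost=10, else_cost=10):
    """Cycles a SIMT warp spends on an if/else under the baseline
    reconvergence scheme: lanes taking different paths execute serially."""
    cycles = 0
    if predicate.any():        # some lanes take the 'if' side
        cycles += if_cost
    if (~predicate).any():     # some lanes take the 'else' side
        cycles += else_cost
    return cycles

lanes = 32
uniform = np.ones(lanes, dtype=bool)       # regular control flow
divergent = np.arange(lanes) % 2 == 0      # half the lanes branch

print(warp_cycles(uniform))    # 10 cycles: one path, full SIMD utilization
print(warp_cycles(divergent))  # 20 cycles: both paths run back to back,
                               # each at 50% lane utilization
```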
123

Genetic and Functional Studies of Non-Coding Variants in Human Disease

Alston, Jessica Shea January 2012
Genome-wide association studies (GWAS) of common diseases have identified hundreds of genomic regions harboring disease-associated variants. Translating these findings into an improved understanding of human disease requires identifying the causal variant(s) and gene(s) in the implicated regions, which, to date, has only been accomplished for a small number of associations. Several factors complicate the identification of mutations playing a causal role in disease. First, GWAS arrays survey only a subset of known variation. The true causal mutation may not have been directly assayed in the GWAS and may be an unknown, novel variant. Moreover, the regions identified by GWAS may contain several genes and many tightly linked variants with equivalent association signals, making it difficult to decipher causal variants from association data alone. Finally, in many cases the variants with the strongest association signals map to non-coding regions that we do not yet know how to interpret and where it remains challenging to predict a variant's likely phenotypic impact. Here, we present a framework for the genetic and functional study of intergenic regions identified through GWAS and describe application of this framework to chromosome 9p21: a non-coding region with associations to type 2 diabetes (T2D), myocardial infarction (MI), aneurysm, glaucoma, and multiple cancers. First, we compare methods for genetic fine-mapping of GWAS associations, including methods for creating a more comprehensive catalog of variants in implicated regions and methods for capturing these variants in case-control cohorts. Next, we describe an approach for using massively parallel reporter assays (MPRA) to systematically identify regulatory elements and variants across disease-associated regions. On chromosome 9p21, we fine-map the T2D and MI associations and identify, for each disease, a collection of common variants with equivalent association signals. Using MPRA, we identify hundreds of regulatory elements on chromosome 9p21 and multiple variants (including MI- and T2D-associated variants) with evidence for allelic effects on regulatory activity that can serve as a foundation for further study. More generally, the methods presented here have broad potential application to the many intergenic regions identified through GWAS and can help to uncover the mechanisms by which variants in these regions influence human disease.
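The MPRA readout described here reduces to comparing RNA/DNA barcode ratios between alleles. A sketch with invented counts and a simple median estimator, not the dissertation's statistical pipeline:

```python
import numpy as np

def allelic_skew(rna_ref, dna_ref, rna_alt, dna_alt, pseudo=1.0):
    """Log2 ratio of regulatory activity between alleles, where activity
    is the RNA barcode count normalized to the DNA (plasmid) count.
    Arrays hold per-barcode counts for one candidate variant."""
    act_ref = (rna_ref + pseudo) / (dna_ref + pseudo)
    act_alt = (rna_alt + pseudo) / (dna_alt + pseudo)
    return np.log2(np.median(act_alt) / np.median(act_ref))

# Invented counts for one variant, eight barcodes per allele:
rna_ref = np.array([120,  90, 150, 110,  95, 130, 105, 140])
rna_alt = np.array([260, 210, 300, 240, 220, 280, 230, 290])
dna_ref = np.full(8, 100)
dna_alt = np.full(8, 100)

skew = allelic_skew(rna_ref, dna_ref, rna_alt, dna_alt)
print(f"log2(alt/ref) regulatory activity: {skew:+.2f}")  # ~ +1.1
```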
124

Novel high-throughput screening methods for the engineering of hydrolases

Gebhard, Mark Christopher 15 June 2011
Enzyme engineering relies on changes in the amino acid sequence of an enzyme to give rise to improvements in catalytic activity, substrate specificity, thermostability, and enantioselectivity. However, beneficial amino acid substitutions in proteins are difficult to rationally predict. Large numbers of enzyme variants containing random amino acid substitutions are therefore screened in a high-throughput manner to isolate improved enzymes. Identifying improved enzymes from the resulting library of randomized variants is a current challenge in protein engineering. This work focuses on the development of high-throughput screens for a class of enzymes called hydrolases, in particular proteases and esterases. In the first part of this work, we have developed an assay for detecting protease activity in the cytoplasm of Escherichia coli by exploiting the SsrA protein degradation pathway and flow cytometry. In this method, a protease-cleavable linker is inserted between GFP and the SsrA degradation tag in a fusion protein. The SsrA-tagged fusion protein is degraded in the cell unless a co-expressed protease cleaves the linker, conferring higher cellular fluorescence. The assay can detect specific cleavage of substrates by TEV protease and human caspase-8. To apply the screen for protease engineering, we sought to evolve a TEV protease variant with altered P1 specificity. However, in screening enzyme libraries, the clones we recovered were found to be false positives in that they did not express protease variants with the requisite specificities. These experiments provided valuable information on physiological and chemical parameters that can be employed to optimize the screen for directed evolution of novel protease activities. In the second part of this work, single bacterial cells expressing an esterase in the periplasm were compartmentalized in aqueous droplets of a water-in-oil emulsion also containing a fluorogenic ester substrate. The primary water-in-oil emulsion was then re-emulsified to form a water-in-oil-in-water double emulsion that could be analyzed and sorted by flow cytometry. This method was used to enrich cells expressing an esterase with activity towards fluorescein dibutyrate from an excess of cells expressing an esterase with no activity. A 50-fold enrichment was achieved in one round of sorting, demonstrating the potential of this method as a high-throughput screen for esterase activity. This method is suitable for engineering esterases with novel catalytic specificities or higher stability.
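For reference, fold enrichment from a sort of this kind is commonly computed as an odds ratio of active to inactive cells before and after sorting; the cell counts below are hypothetical, chosen only to illustrate the calculation:

```python
def enrichment_factor(active_before, total_before, active_after, total_after):
    """Fold enrichment after one round of sorting, computed as the ratio
    of active:inactive odds after sorting to the odds before sorting."""
    odds_before = active_before / (total_before - active_before)
    odds_after = active_after / (total_after - active_after)
    return odds_after / odds_before

# Hypothetical: 1% active cells spiked into an inactive excess before the
# sort; 34% active among recovered cells afterwards (~51-fold enrichment).
print(f"{enrichment_factor(1_000, 100_000, 3_400, 10_000):.0f}-fold")
```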
125

Elucidation and optimization of molecular factors for dendritic cell responses to surface presented glycans

Hotaling, Nathan Alexander 27 August 2014
Dendritic cells (DCs) are regulators of the immune system and express a class of pattern recognition receptors known as C-type lectin receptors (CLRs) to recognize and respond to carbohydrates (glycans). Dendritic cells are hypothesized to be key mediators in the immune response to implanted materials, and ligation of CLRs has been shown to have diverse effects on DC phenotype, ranging from tolerogenic to pro-inflammatory. Thus, designing future biomaterials and combination products that harness the potential of CLR ligation on DCs has great promise. Additionally, many of the proteins that adsorb to biomaterials upon implantation are glycosylated, so understanding this interaction would provide further insight into the host response to currently implanted materials. However, DC responses to glycans presented from non-phagocytosable surfaces have not been well characterized, and the optimal factors for DC phenotype modulation by surface-presented glycans are unknown. Additionally, studies relating DC responses to glycan structures in soluble and phagocytosable displays to those in non-phagocytosable display have not been performed. This is of critical importance to the field because complex glycan structures can be obtained only in extremely limited supply. Because of this limited supply, the trend in glycomics has been toward glycan microarrays for assessing initial candidates of interest for further study. However, the assumption that cell responses to these glycoconjugate microarrays are equivalent to responses to soluble or phagocytosable conjugates has not been validated. Therefore, the purpose of this study was to 1) determine the optimal molecular contextual variables of glycoconjugate presentation from a non-phagocytosable surface, namely charge, density, and glycan structure, for modulating DC phenotype; and 2) determine whether the modality of glycoconjugate presentation (soluble, phagocytosable, or non-phagocytosable) modulates DC phenotype differentially. To determine the effect of the molecular contextual variables, primary human immature DCs (iDCs) were exposed to a range of adsorbed glycoconjugates in a 384-well plate and their subsequent phenotype assessed via a novel in-house high-throughput (HTP) assay. Bovine serum albumin (BSA) was modified to have a range of glycan densities and isoelectric points to determine which of these were optimal for DC phenotype modulation. Next, several poly-mannose structures were presented to DCs to determine if the DC response was structure specific. Finally, the contextual variables were modeled in a multivariate general linear model to determine underlying trends in DC behavior and optimal factors for glycan presentation from non-phagocytosable surfaces. To determine the effect of the modality of glycoconjugate display on DCs, optimized glycoconjugates from aim 1 were adsorbed to the wells of a 384 flat-well plate, delivered at varying soluble concentrations, or adsorbed to phagocytosable 1 µm beads, and subsequent DC phenotype was assessed via the HTP assay. The cell response to the glycoconjugates was then validated to be CLR-mediated, and the DC response to glycan modality was modeled in another general linear model. Results from these studies show that highly cationized, high-density glycoconjugates presented from non-phagocytosable flat-well display modulate DC phenotype toward a pro-inflammatory phenotype to the greatest extent.
Additionally, significant impacts on DC phenotype in response to adsorbed conjugates can be seen when grouping glycan structures by terminal glycan motif. Finally, the DC response to glycoconjugates was found to be CLR-mediated, and each modality of glycan display is significantly different, in terms of DC phenotype, from the others. These results provide indications for the future design of glycan microarray systems, biomaterials, and combination products. Furthermore, this work indicates that different mechanisms are involved in the binding and processing of surface-bound versus soluble glycoconjugates. With further study these differences could be harnessed for use in the next generation of biomaterials.
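A minimal sketch of the kind of multivariate general linear model described above, fit by ordinary least squares; the design matrix columns, the modality coding, and the phenotype scores are all invented for illustration:

```python
import numpy as np

# Invented design matrix: one row per glycoconjugate condition.
# Columns: intercept, glycan density (mol glycan per mol BSA),
# conjugate isoelectric point, display modality (0 = adsorbed,
# 1 = soluble, 2 = bead-bound).
X = np.array([
    [1.0,  5.0, 4.8, 0],
    [1.0, 20.0, 4.8, 0],
    [1.0, 20.0, 9.5, 0],
    [1.0,  5.0, 9.5, 1],
    [1.0, 20.0, 9.5, 1],
    [1.0, 20.0, 9.5, 2],
])
# Invented response: a pro-inflammatory phenotype score per condition.
y = np.array([0.10, 0.25, 0.80, 0.30, 0.55, 0.40])

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
for name, b in zip(["intercept", "density", "pI", "modality"], beta):
    print(f"{name:>9}: {b:+.3f}")
```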
126

Analysis of network adapter architecture

Δάτσιος, Χρυσοβαλάντης-Ζαχαρίας 03 August 2009
One of the main challenges in high-throughput networking is the "throughput preservation problem". For traditional networks that use standardized protocols, this problem can be attacked only through the design and implementation of the protocol-processing system, since the mechanisms and syntax of the protocols themselves are already fixed. This work explores ways of preserving throughput in the most fundamental of the network subsystems: the network adapter. Design decisions concerning a network adapter's architecture play a decisive role in its performance. The architecture of a structurally and conceptually simple adapter is presented first, and several architectural modifications aimed at improving its performance are then explored. Using Mesquite's CSIM simulation library, the resulting architectures are modeled; by assigning their parameters the characteristics of products currently available on the market, the improvement each modification brings is estimated.
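CSIM itself is a C-based process-oriented simulation library; the flavor of such a model can be conveyed by a few lines of Python. The single deterministic service stage, service time, and arrival rate below are illustrative assumptions, not a configuration from the thesis:

```python
import random

def simulate_adapter(service_us, arrival_pps, n_packets=100_000, seed=1):
    """Minimal discrete-event model of a network adapter: Poisson packet
    arrivals feeding a single deterministic processing stage (M/D/1-like).
    Returns the throughput, in packets per second, actually sustained."""
    random.seed(seed)
    t_us = 0.0      # arrival clock, microseconds
    free_at = 0.0   # time the processing stage becomes free
    for _ in range(n_packets):
        t_us += random.expovariate(arrival_pps) * 1e6
        start = max(t_us, free_at)
        free_at = start + service_us
    return n_packets / (free_at / 1e6)

# 2 us per packet caps the stage at 500k packets/s; offer it 400k:
print(f"{simulate_adapter(2.0, 400_000):,.0f} packets/s sustained")
```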
127

Development and application of a rapid micro-scale method of lignin content determination in Arabidopsis thaliana accessions

Chang, Xue Feng 05 1900
Lignin is a major chemical component of plants and the second most abundant natural polymer after cellulose. The concerns and interests of agriculture and industry have stimulated the study of genes governing lignin content in plants in an effort to adapt plants to human purposes. Arabidopsis thaliana provides a convenient model for the study of the genes governing lignin content because of its short growth cycle, small plant size, and small, completely sequenced genome. In order to identify the genes controlling lignin content in Arabidopsis accessions using Quantitative Trait Locus (QTL) analysis, a rapid micro-scale method of lignin determination is required. The acetyl bromide method has been modified to enable the rapid micro-scale determination of lignin content in Arabidopsis. Modifications included the use of a micro-ball mill, adoption of a modified rapid method of extraction, use of an ice bath to stabilize solutions, and reduction in solution volumes. The modified method was shown to be accurate and precise, with values in agreement with those determined by the conventional method. The extinction coefficient for Arabidopsis lignin, dissolved using acetyl bromide, was determined to be 23.35 L·g⁻¹·cm⁻¹. This value is independent of the Arabidopsis accession and environmental growth conditions, and is insensitive to the syringyl/guaiacyl ratio. The modified acetyl bromide method was shown to be well correlated with the 72% sulfuric acid method once the latter had been corrected for protein contamination and acid-soluble lignin content (R² = 0.988, P < 0.0001). As determined by the newly developed acetyl bromide method and confirmed by the sulfuric acid method, lignin content was found to diverge widely among Arabidopsis accessions. Lignin content was found to be weakly correlated with growth rate among Arabidopsis accessions (R² = 0.48, P = 0.011). Lignin content was also found to be correlated with plant height among Arabidopsis accessions (R² = 0.491, P < 0.0001).
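The acetyl bromide measurement reduces to a Beer-Lambert calculation using the extinction coefficient reported above; a sketch in which the sample mass, dissolution volume, and absorbance are hypothetical:

```python
def lignin_percent(absorbance, sample_mg, volume_ml,
                   extinction=23.35, path_cm=1.0):
    """Acetyl bromide lignin estimate via Beer-Lambert:
    A = epsilon * c * l, so c = A / (epsilon * l) in g/L, with
    epsilon = 23.35 L/(g*cm) as reported above for Arabidopsis lignin."""
    conc_g_per_l = absorbance / (extinction * path_cm)
    lignin_g = conc_g_per_l * volume_ml / 1000.0
    return 100.0 * lignin_g / (sample_mg / 1000.0)

# e.g. 2 mg of extractive-free tissue dissolved to 10 mL, A = 0.70:
print(f"{lignin_percent(0.70, 2.0, 10.0):.1f}% lignin")  # ~15.0%
```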
128

Parallel Computing in Statistical-Validation of Clustering Algorithm for the Analysis of High throughput Data

Atlas, Mourad 12 May 2005
Currently, clustering applications use classical methods to partition a set of data (or objects) into a set of meaningful sub-classes, called clusters. A cluster is therefore a collection of objects which are "similar" among themselves, and thus can be treated collectively as one group, and which are "dissimilar" to the objects belonging to other clusters. However, there are a number of problems with clustering. Among them, as mentioned in [Datta03], dealing with large numbers of dimensions and large numbers of data items can be problematic because of computational time. In this thesis, we investigate all the clustering algorithms used in [Datta03] and present a parallel solution to minimize the computational time. We apply parallel programming techniques to the statistical algorithms as a natural extension of the sequential programming technique using R. The proposed parallel model has been tested on a high-throughput dataset: microarray data on the transcriptional profile during sporulation in budding yeast, containing more than 6,000 genes. Our evaluation includes clustering algorithm scalability pertaining to datasets with varying dimensions, the speedup factor, and the efficiency of the parallel model over the sequential implementation. Our experiments show that the gene expression data follow the pattern predicted in [Datta03]: Diana appears to be a solid performer, and the group means for each cluster coincide with those in [Datta03]. We show that our parallel model is applicable to the clustering algorithms and more useful in applications that deal with high-throughput data, such as gene expression data.
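The thesis parallelizes R code; the same idea can be sketched in Python by farming independent validation runs out to worker processes. The dataset, dispersion score, and worker count below are stand-ins, not the thesis's actual pipeline:

```python
from functools import partial
from multiprocessing import Pool

import numpy as np

def dispersion(k, data, iters=10):
    """One validation run: quick k-means-style clustering into k groups,
    returning mean within-cluster dispersion. Runs for different k are
    independent, so they parallelize trivially."""
    rng = np.random.default_rng(k)
    centers = data[rng.choice(len(data), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((data[:, None, :] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([data[labels == i].mean(0) if (labels == i).any()
                            else centers[i] for i in range(k)])
    return float(((data - centers[labels]) ** 2).sum(-1).mean())

if __name__ == "__main__":
    # Stand-in for the sporulation microarray: ~6,000 genes x 7 time points.
    genes = np.random.default_rng(0).normal(size=(6000, 7))
    with Pool(processes=4) as pool:  # speedup ~ #cores for independent runs
        scores = pool.map(partial(dispersion, data=genes), range(2, 9))
    for k, s in zip(range(2, 9), scores):
        print(f"k={k}: {s:.3f}")
```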
129

An optimisation approach to improve the throughput in wireless mesh networks through network coding

Van der Merwe, Corna January 2011
In this study, the effect of implementing Network Coding on the aggregated throughput in Wireless Mesh Networks was examined. Wireless Mesh Networks (WMNs) are multiple-hop wireless networks in which routing through any node is possible. The implication of this characteristic is that messages flow across the points where they would have been terminated in conventional wireless networks. User nodes in conventional wireless networks only transmit and receive messages from an Access Point (AP), and discard any messages not intended for them. The result is an increase in the volume of network traffic through the links of WMNs. Additionally, the dense collection of multiple RF signals propagating through a shared wireless medium contributes to the situation where the links become saturated at levels below their capacity. The need exists to examine methods that will improve the utilisation of the shared wireless medium in WMNs. Network Coding is a coding and decoding technique at the network level of the OSI stack, aimed at improving the limits of saturated links. The technique implies that the bandwidth is simultaneously shared amongst separate message flows, by combining these flows at common intermediate nodes. The number of transmissions needed to convey information through the network is decreased by Network Coding. The result is an improvement in the aggregated throughput. The research approach followed in this dissertation includes the development of a model that investigates the aggregated throughput performance of WMNs. The scenario of the model followed a typical example of indoor WMN implementations. Therefore, the physical-environment representation of the network elements included an indoor log-distance path loss channel model, to account for effects such as power absorption through walls and shadowing. Network functionality in the model was represented through a network flow programming problem, concerned with determining the optimal amount of flow through the links of the WMN, subject to constraints pertaining to the link capacities and mass balance at each node. The functional requirements of the model stated that multiple concurrent sessions were to be represented, which implied that the network flow problem had to be a multi-commodity network flow problem. Additionally, the model requirements stated that each session of flow should remain on a single path, which implied that the network flow problem had to be an integer programming problem. Therefore, the network flow programming problem of the model was considered mathematically equivalent to a multi-commodity integer programming problem. The complexity of multi-commodity integer programming problems is NP-hard. A heuristic solving method, Simulated Annealing, was implemented to optimise the objective function represented by the network flow programming problem of the model. The findings from this research provide evidence that the implementation of Network Coding in WMNs nearly doubles the calculated aggregated throughput values. The magnitude of this throughput increase can be further improved by additional manipulation of the network traffic dispersion, achieved by utilising link-state methods, rather than distance-vector methods, to establish paths for the sessions of flow present in the WMNs. / Thesis (M.Ing. (Computer and Electronical Engineering))--North-West University, Potchefstroom Campus, 2012.
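To make the shape of that optimisation concrete, the sketch below anneals single-path choices for two sessions over a tiny capacitated graph. The topology, demands, capacities, and cooling schedule are all invented, and the thesis's model and objective are richer than this crude saturation model:

```python
import math, random

# Hypothetical WMN: undirected capacitated links and candidate paths per session.
CAP = {("a", "b"): 10, ("a", "c"): 6, ("b", "c"): 10, ("b", "d"): 8, ("c", "d"): 8}

SESSIONS = {  # session -> (demand, candidate single paths)
    "s1": (6, [["a", "b", "c"], ["a", "c"]]),
    "s2": (6, [["a", "b", "d"], ["a", "c", "d"]]),
}

def throughput(choice):
    """Aggregated throughput of a single-path assignment; demand on an
    overloaded link is counted as lost (a crude saturation model)."""
    load = {}
    for s, i in choice.items():
        demand, paths = SESSIONS[s]
        for u, v in zip(paths[i], paths[i][1:]):
            e = tuple(sorted((u, v)))
            load[e] = load.get(e, 0) + demand
    lost = sum(max(0, l - CAP[e]) for e, l in load.items())
    return sum(d for d, _ in SESSIONS.values()) - lost

def anneal(steps=2000, t0=5.0, seed=0):
    random.seed(seed)
    choice = {s: 0 for s in SESSIONS}
    best, best_tp = dict(choice), throughput(choice)
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9            # linear cooling
        s = random.choice(list(SESSIONS))             # re-route one session
        cand = dict(choice)
        cand[s] = random.randrange(len(SESSIONS[s][1]))
        delta = throughput(cand) - throughput(choice)
        if delta >= 0 or random.random() < math.exp(delta / t):
            choice = cand
        if throughput(choice) > best_tp:
            best, best_tp = dict(choice), throughput(choice)
    return best, best_tp

print(anneal())  # expect s1 -> a-c and s2 -> a-b-d: no saturated links
```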
