61

On the analysis of REMD protein structure prediction simulations for reducing volume of analytical data

Macedo, Rafael Cauduro Oliveira 30 August 2017 (has links)
Proteins perform a vital role in all living beings, mediating a series of processes necessary to life. Although we have ways to determine the composition of these molecules, we still lack sufficient knowledge to determine their 3D structure, which plays an important role in their function, in a fast and inexpensive manner. One of the main computational methods applied to the study of proteins and their folding process, which determines their structure, is Molecular Dynamics.
An enhancement of this method, known as Replica-Exchange Molecular Dynamics (REMD), is capable of producing much better results, at the expense of a significant increase in computational cost and in the volume of raw data generated. This dissertation presents a novel optimization for this method, titled Analytical Data Filtering, which aims to optimize post-simulation analysis by filtering out unsatisfactory predicted structures via different absolute quality metrics. The proposed methodology has the potential to work together with other optimization approaches and, to the best of the author's knowledge, covers an area still largely untouched by them. Further on, the SnapFi tool is presented, designed specifically for filtering unsatisfactory structure predictions while remaining able to work with the different optimization approaches of the REMD method. A study was then conducted on a test dataset of REMD protein structure prediction simulations, aiming to elucidate a series of hypotheses regarding the impact of the different REMD temperatures on the final quality of the predicted structures, the efficiency of the different absolute quality metrics, and a possible filtering configuration that takes advantage of such metrics. It was observed that high temperatures may be safely discarded from post-simulation analysis of REMD protein structure prediction simulations, that absolute quality metrics possess a high variance in efficiency (in quality terms) between different protein structure prediction simulations, and that filtering configurations composed of such metrics inherit this inconvenient variance.
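To make the filtering step concrete, here is a minimal sketch of post-simulation analytical data filtering, assuming each snapshot carries its replica temperature and a single absolute quality score. The Snapshot type, the 400 K cutoff and the 0.5 quality threshold are illustrative assumptions, not the SnapFi interface.

```python
# Hypothetical sketch of analytical data filtering (not the SnapFi API):
# discard snapshots from high-temperature replicas, then keep only those
# whose absolute quality score clears a threshold.
from dataclasses import dataclass

@dataclass
class Snapshot:
    replica_temperature: float  # replica temperature in Kelvin (assumed field)
    quality_score: float        # absolute quality metric, higher is better (assumed)

def filter_snapshots(snapshots, max_kelvin=400.0, min_quality=0.5):
    """Drop high-temperature replicas, then filter by absolute quality."""
    low_temp = [s for s in snapshots if s.replica_temperature <= max_kelvin]
    return [s for s in low_temp if s.quality_score >= min_quality]

# Only the low-temperature, high-quality snapshot survives.
pool = [Snapshot(300.0, 0.8), Snapshot(300.0, 0.3), Snapshot(450.0, 0.9)]
print(len(filter_snapshots(pool)))  # -> 1
```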
62

Using Hash Trees for Database Schema Inconsistency Detection

Spik, Charlotta January 2019 (has links)
For this work, two algorithms were developed to improve the performance of inconsistency detection by using Merkle trees. The first builds a hash tree from a database schema version, and the second compares two hash trees to find where changes have occurred. Performance testing of the hash tree approach against the current approach used by Cisco, in which all data in the schema is traversed, shows that the hash tree algorithm for inconsistency detection performs significantly better than the complete traversal algorithm in all cases tested, except when all nodes in the tree have changed. The factor of improvement is directly related to the number of nodes that have to be traversed for the hash tree, which in turn depends on the number of changes made between versions and on where in the schema the changed nodes sit. The real-life example scenarios used for performance testing show that, on average, the hash tree algorithm only needs to traverse 1.5% of the nodes that Cisco's complete traversal algorithm does, giving on average a 200-fold improvement in performance. Even in the worst real-life case used for testing, the hash tree algorithm performed five times better than the complete traversal algorithm.
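A minimal sketch of the two algorithms described above, assuming the schema is represented as a nested dict and hashed with SHA-256 (both illustrative assumptions, not Cisco's schema format): build_tree combines child hashes bottom-up into a Merkle tree, and diff walks two trees top-down, descending only into subtrees whose hashes disagree.

```python
# Sketch of Merkle-tree construction and comparison over a toy schema.
import hashlib

def build_tree(node):
    """Return (hash, children): children maps schema keys to subtrees."""
    if not isinstance(node, dict):               # leaf: hash the value itself
        return hashlib.sha256(repr(node).encode()).hexdigest(), {}
    children = {k: build_tree(v) for k, v in sorted(node.items())}
    combined = "".join(k + h for k, (h, _) in children.items())
    return hashlib.sha256(combined.encode()).hexdigest(), children

def diff(a, b, path=""):
    """Yield paths where two hash trees disagree, pruning matching subtrees."""
    (ha, ca), (hb, cb) = a, b
    if ha == hb:
        return                                   # identical subtree: skip it
    if not ca and not cb:
        yield path or "/"                        # changed leaf value
        return
    for key in sorted(set(ca) | set(cb)):
        if key not in ca or key not in cb:
            yield f"{path}/{key}"                # node added or removed
        else:
            yield from diff(ca[key], cb[key], f"{path}/{key}")

v1 = build_tree({"users": {"id": "int", "name": "string"}})
v2 = build_tree({"users": {"id": "int", "name": "text"}})
print(list(diff(v1, v2)))  # -> ['/users/name']
```

Because a matching hash prunes an entire subtree at once, the comparison cost grows with the number of changed paths rather than with the schema size, which is consistent with the traversal savings reported above.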
63

Scalable download protocols

Carlsson, Niklas 15 December 2006
Scalable on-demand content delivery systems, designed to effectively handle increasing request rates, typically use service aggregation or content replication techniques. Service aggregation relies on one-to-many communication techniques, such as multicast, to efficiently deliver content from a single sender to multiple receivers. With replication, multiple geographically distributed replicas of the service or content share the load of processing client requests and enable delivery from a nearby server.

Previous scalable protocols for downloading large, popular files from a single server include batching and cyclic multicast. Analytic lower bounds developed in this thesis show that neither of these protocols consistently yields performance close to optimal. New hybrid protocols are proposed that achieve within 20% of the optimal delay in homogeneous systems, as well as within 25% of the optimal maximum client delay in all heterogeneous scenarios considered.

In systems utilizing both service aggregation and replication, well-designed policies determining which replica serves each request must balance the objectives of achieving high locality of service, and high efficiency of service aggregation. By comparing classes of policies, using both analysis and simulations, this thesis shows that there are significant performance advantages in using current system state information (rather than only proximities and average loads) and in deferring selection decisions when possible. Most of these performance gains can be achieved using only local (rather than global) request information.

Finally, this thesis proposes adaptations of already proposed peer-assisted download techniques to support a streaming (rather than download) service, enabling playback to begin well before the entire media file is received. These protocols split each file into pieces, which can be downloaded from multiple sources, including other clients downloading the same file. Using simulations, a candidate protocol is presented and evaluated. The protocol includes both a piece selection technique that effectively mediates the conflict between achieving high piece diversity and the in-order requirements of media file playback, as well as a simple on-line rule for deciding when playback can safely commence.
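As a sketch of the piece-selection tension described above, the toy policy below chooses between the earliest missing piece (serving in-order playback) and the rarest missing piece (serving piece diversity); the probabilistic rule and the p_inorder value are assumptions for illustration, not the candidate protocol's actual selection technique.

```python
# Toy piece-selection policy: mix in-order picks with rarest-first picks.
import random

def select_piece(missing, availability, p_inorder=0.5, rng=random):
    """missing: sorted list of piece indices still needed;
    availability: piece index -> number of sources currently holding it."""
    if not missing:
        return None
    if rng.random() < p_inorder:
        return missing[0]  # earliest missing piece, keeps playback fed
    # rarest-first keeps piece diversity high across the swarm
    return min(missing, key=lambda i: availability.get(i, 0))

random.seed(1)
print(select_piece([3, 4, 9], {3: 5, 4: 2, 9: 1}))  # 3 or 9, by the coin flip
```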
64

The role of 3D printing in biological anthropology

Allard, Travis T. 14 September 2006 (has links)
The following work explores the role of 3D printing in biological anthropology. A case study approach is used to provide an understanding of two different applications for 3D printing and to identify a potential methodology for creating 3D models. Case study one looks at the application of 3D printing to reconstruction projects using a flowerpot to test the reconstruction methodology. The second case study uses both laser surface and CT scanning to create a replica of a human skeleton. The two methods of data acquisition are evaluated for advantages and limitations in creating the virtual model. This work shows that there is a role for 3D printing in biological anthropology, but that data acquisition and processing issues are the most significant limiting factors in producing skeletal replicas.
66

Conversion of 3-D nanostructured biosilica templates into non-oxide replicas

Bao, Zhihao 08 January 2008 (has links)
Diatoms are abundant, diverse, and highly reproductive, characteristics that make their nano-structured frustules attractive for a wide range of applications. To overcome the limitation of their silica-based frustule composition, diatom frustules were converted in the present study into a variety of materials, including silicon, silicon carbide, silver, gold, palladium and carbon. The compositions and the extent of shape preservation of the replicas were examined with characterization methods such as X-ray diffraction, SEM, TEM and FTIR analyses. The replicas retained the complex 3D structures and nano-scale features of the starting diatom frustules. Some properties and possible applications of the converted materials are explored, and the kinetics and thermodynamics underlying the successful replications (conversions) are also studied and discussed.
67

Strategies for cellulose fiber modification

Persson, Per January 2004 (has links)
This thesis describes strategies for and examples of cellulose fiber modification. The ability of an engineered biocatalyst, a cellulose-binding module fused to the Candida antarctica lipase B, to catalyze ring-opening polymerization of ε-caprolactone in close proximity to cellulose fiber surfaces was explored. The water content in the system was found to regulate the polymer molecular weight, whereas the temperature primarily influenced the reaction rate. The hydrophobicity of the cellulose sample increased as a result of the presence of surface-deposited polyester.

A two-step enzymatic method was also investigated. Here, Candida antarctica lipase B catalyzed the acylation of xyloglucan oligosaccharides. The modified carbohydrates were then incorporated into longer xyloglucan molecules through the action of a xyloglucan endotransglycosylase. The modified xyloglucan chains were finally deposited on a cellulose substrate.

The action of Candida antarctica lipase B was further investigated in the copolymerization of ε-caprolactone and D,L-lactide. Copolymerizations with different ε-caprolactone-to-D,L-lactide ratios were carried out. Initially, the polymerization was slowed by the presence of D,L-lactide. During this stage, D,L-lactide was consumed more rapidly than ε-caprolactone and the incorporation occurred dimer-wise with regard to the lactic acid units.

Morphological studies on wood fibers were conducted using a sol-gel mineralization method. The replicas produced were studied, without additional sample preparation, by electron microscopy and nitrogen adsorption. Information concerning the structure and accessibility of the porous fiber wall was obtained. Studies of never-dried kraft pulp casts revealed micro-cavities and cellulose fibrils with mean widths of 4.7 (±2) and 3.6 (±1) nm, respectively.

Finally, cationic catalysis by simple carboxylic acids was studied. L-Lactic acid was shown to catalyze the ring-opening polymerization of ε-caprolactone in bulk at 120 °C. The reaction was initiated with methyl β-D-glucopyranoside, sucrose or raffinose, which resulted in carbohydrate-functionalized polyesters. The regioselectivity of the acylation was well in agreement with the corresponding lipase-catalyzed reaction. The polymerization was also initiated with a hexahydroxy-functional compound, which resulted in a dendrimer-like star polymer. The L-lactic acid was readily recycled, which made consecutive reactions using the same catalyst possible.

Keywords: Candida antarctica lipase B, cationic catalysis, cellulose-binding module, dendrimer, enzymatic polymerization, fiber modification, silica-cast replica, sol-gel mineralization, organocatalysis, xyloglucan endotransglycosylase
68

Self-tuning dynamic voltage scaling techniques for processor design

Park, Junyoung 30 January 2014 (has links)
The Dynamic Voltage Scaling (DVS) technique has proven ideal for balancing the performance and energy consumption of a processor, since it allows an almost cubic reduction in dynamic power consumption at the cost of only a nearly linear reduction in performance. Thanks to this property, the DVS technique has been used for two main purposes: energy saving and temperature reduction. Recently, however, Dynamic Voltage Scaled (DVS) processors have lost some of their appeal as process technology advances, owing to increasing Process, Voltage and Temperature (PVT) variations. To make a processor tolerant to the growing uncertainties caused by such variations, processor designers have added larger timing margins. In a modern-day DVS processor, therefore, reducing the voltage costs comparatively more performance than it did in its predecessors. For this reason, the technique has considerable room for improvement, for the following reasons. (a) From an energy-saving viewpoint, the excessive margins that account for worst-case operating conditions in a DVS processor can be exploited, because they are rarely needed during run-time. (b) From a temperature-reduction point of view, accurate prediction of the optimal performance point of a DVS processor can increase its performance. In this dissertation, we propose four performance improvement ideas spanning these two uses of the DVS technique. Regarding the DVS technique for energy saving, we introduce three different margin reduction (or margin decision) techniques. First, we introduce a new indirect Critical Path Monitor (CPM) that makes a conventional DVS processor adaptive to its given environment. Our CPM is composed of several Slope Generators, each of which generates voltage scaling slopes similar to those of the potential critical paths under a given process corner. Each CPR in a Slope Generator tracks the delays of the potential critical paths with minimum difference at any condition within a certain voltage range. The CPRs in the same Slope Generator are connected to a multiplexer, and one of them is selected according to the current voltage level. Calibration is done through the conventional speed-binning process with clock duty-cycle modulation. Second, we propose a new direct CPM based on a non-speculative pre-sampling technique. A processor based on this technique predicts timing errors in the actual critical paths and takes preventive steps to avoid them whenever the timing margins fall below a critical level. Unlike a direct CPM that uses circuit-level speculative operation, the main flip-flop (FF) of our direct CPM never fails, even though the shadow latch can still experience a timing error, guaranteeing always-correct operation of the processor. Our non-speculative CPM is more suitable for high-performance processor designs than the speculative CPM in that it requires no modification of the original design and has lower power overhead. Third, we introduce a novel method that determines the most accurate margin based on the conventional binning process. By reusing the hold-scan FFs already present in a processor, we reduce design complexity, minimize hardware overhead and increase error-detection accuracy. Running workloads on the processor with Stop-Go clock gating lets us find which paths exhibit timing errors during the speed-binning steps at various fixed temperature levels. From this timing error information, we can determine different maximum frequencies for diverse operating conditions. This method achieves a high degree of accuracy without a large overhead. Regarding the DVS technique for temperature reduction, we introduce a run-time temperature monitoring scheme that predicts the optimal performance point of a DVS processor with high accuracy. To increase the accuracy of this prediction, the technique monitors the thermal stress of the processor during run-time and uses several Look-Up Tables (LUTs) for different process corners. The monitoring is performed while applying Stop-Go clock gating, and the average EN value is calculated at the end of the monitoring period. The optimal performance point is predicted using the average EN value and the LUT corresponding to the process corner under which the processor was manufactured. Simulation results show that we can achieve maximum processor performance while keeping the processor temperature below the threshold temperature.
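The "almost cubic" claim in the opening sentence follows from the standard dynamic-power model P = C·V²·f combined with the observation that achievable frequency scales roughly linearly with supply voltage; the back-of-the-envelope check below uses an arbitrary 20% voltage reduction as an example.

```python
# Rough arithmetic behind the DVS trade-off: P = C * V^2 * f and f ~ V,
# so relative power goes as V^3 while relative performance goes as V.
def dvs_tradeoff(v_scale):
    """Relative (power, performance) after scaling supply voltage by v_scale."""
    power = v_scale ** 3       # V^2 from C*V^2*f, times f proportional to V
    performance = v_scale      # throughput tracks frequency, hence voltage
    return power, performance

p, perf = dvs_tradeoff(0.8)    # a 20% voltage reduction (illustrative)
print(f"power: {p:.0%} of nominal, performance: {perf:.0%} of nominal")
# -> power: 51% of nominal, performance: 80% of nominal
```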
