1

The development of computational high-throughput approaches for screening metal-organic frameworks in adsorptive separation applications

Tao, Andi January 2019 (has links)
Chemical separation accounts for a large proportion of activity in the process industries. Over the past few decades, 10-15% of the world's energy consumption has been attributed to separation processes, and tremendous effort has gone into separating large quantities of chemical mixtures into pure or purer forms. Industrial development and population growth will further increase global energy demand, which makes effective and efficient separation processes one of the most challenging tasks in engineering. Adsorptive separation using porous materials is widely used in industry today. For an adsorptive separation process to be efficient, the essential requirement is a selective adsorbent that possesses a high surface area and preferentially adsorbs one component (or class of similar components). Metal-organic frameworks (MOFs) are promising materials for separation purposes: their diversity, arising from building-block synthesis using metal clusters and organic linkers, gives rise to a wide range of porous structures. Engineering a separation process is a multi-disciplinary problem that requires a holistic approach, and material selection for industrial applications is one of the most significant engineering challenges in the MOF field. The complexity of a screening exercise for adsorptive separations arises from the multitude of existing porous adsorbents, including MOFs (more than 80,000 structures have been synthesised so far), as well as from the multivariate performance criteria that must be considered when selecting or designing an optimal adsorbent for a separation process. It is infeasible to assess all potential materials experimentally to identify promising structures for a particular application. Recently, molecular simulation and mathematical modelling have made an ever-growing contribution to MOF research. These computational tools offer a unique platform for the characterisation, prediction and understanding of MOFs, complementary to experimental techniques. In the first part of this research, Monte Carlo molecular simulation and a number of advanced mathematical methods were used to investigate newly synthesised or little-known MOFs. These techniques made it possible not only to characterise the textural properties of the materials, but also to predict and understand their adsorption performance at the atomic level. Based on the insight gained from the molecular simulations, two computational high-throughput screening approaches were designed and assessed. A multi-scale approach was proposed that combines high-throughput molecular simulation, data mining and advanced visualisation, process systems modelling, and experimental synthesis and testing. The focus here was on two main applications: on the one hand, the challenging CO/N2 separation, which is critical for the petrochemical sector and involves two molecules with very similar physical properties; on the other hand, the separation of chiral molecules. For CO/N2 separation, a database of 184 Cu-Cu paddle-wheel MOFs, which contain unsaturated metal centres acting as strong interaction sites, was extracted from the CSD (Cambridge Structural Database) MOF subset for material screening. For chiral separation, an efficient high-throughput approach based on the calculation of Henry's constants was developed in this research.
Owing to the nature of chirality, this separation, of particular relevance to the pharmaceutical sector, is crucially important. A database of 1,407 homochiral MOFs was extracted, again from the CSD MOF subset, for material screening for enantioselective adsorption. The results obtained with these computational high-throughput approaches allow the screening of interesting existing structures, and could have a considerable impact both on establishing MOFs as industrially relevant adsorbents and on guiding the synthesis of these materials. Among the many possibilities, the ultimate interest of this work is in developing an integrated, systematic study of the structure-adsorption performance relationship, working with a limited library of candidate MOF structures, in order to identify promising trends and materials for the specific applications mentioned above. In summary, the overall aim of this research was to exploit different computational techniques and to develop novel high-throughput approaches in order to tackle important engineering challenges.
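For readers unfamiliar with Henry's-constant-based screening, a standard formulation is given below (an assumed sketch based on Widom test-particle insertion; these are not the exact expressions used in the thesis):

```latex
% Standard Widom test-particle expressions (assumed form, not taken from the thesis):
% Henry coefficient of a guest molecule in a rigid framework,
K_H \;=\; \frac{\langle e^{-\beta \Delta U} \rangle}{R\,T\,\rho_{\mathrm{framework}}},
\qquad \beta = \frac{1}{k_B T},
% and screening metrics as ratios of Henry coefficients at infinite dilution:
S_{\mathrm{CO}/\mathrm{N_2}} = \frac{K_H^{\mathrm{CO}}}{K_H^{\mathrm{N_2}}},
\qquad
S_{\mathrm{enantio}} = \frac{K_H^{(R)}}{K_H^{(S)}}.
```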
2

Master/worker parallel discrete event simulation

Park, Alfred John. January 2008 (has links)
Thesis (M. S.)--Computing, Georgia Institute of Technology, 2009. / Committee Chair: Fujimoto, Richard; Committee Member: Bader, David; Committee Member: Perumalla, Kalyan; Committee Member: Riley, George; Committee Member: Vuduc, Richard.
3

On the fragmentation of self-gravitating discs

Meru, Farzana Karim January 2010 (has links)
I have carried out three-dimensional numerical simulations of self-gravitating discs to determine under what circumstances they fragment to form bound clumps that may grow into giant planets. Through radiation hydrodynamical simulations using a Smoothed Particle Hydrodynamics code, I find that the disc opacity plays a vital role in determining whether a disc fragments. Specifically, opacities that are smaller than interstellar Rosseland mean values promote fragmentation (even at small radii, R < 25 AU), since low opacities allow a disc to cool quickly. This may occur if a disc has a low metallicity or if grain growth has occurred. Given that the standard core accretion model is less likely to form planets in a low metallicity environment, I predict that gravitational instability is the dominant planet formation mechanism in such environments. In addition, I find that the presence of stellar irradiation generally acts to inhibit fragmentation (since the discs can only cool to the temperature set by the stellar irradiation). However, fragmentation may occur if the irradiation is sufficiently weak that it allows the disc to attain a low Toomre stability parameter. With specific reference to the HR 8799 planetary system, I find that fragments can only form in the radial range where the HR 8799 planets are located (approximately 24-68 AU) if the disc is massive. In such a high-mass regime, mass transport occurs in the disc, causing the surface mass density to change. Fragmentation is therefore affected not only by the disc temperature and cooling, but also by any restructuring due to gravitational torques. The high-mass discs also pose a problem for the formation of this system, because the protoplanets accrete from the disc and end up with masses greater than those inferred from observation; the growth of the planets would therefore need to be inhibited. In addition, I find that further fragmentation subsequently takes place at small radii. By way of analytical arguments combined with hydrodynamical simulations using a parameterised cooling method, I explore the fragmentation criterion, which in the past has placed emphasis on the cooling timescale in units of the orbital timescale, beta. I find that at a given radius the surface mass density (i.e. disc mass and profile) and the star mass also play a crucial role in determining whether a disc fragments, as well as where in the disc fragments form. For shallow surface mass density profiles (p < 2, where the surface mass density is proportional to R^{-p}), fragments form in the outer regions of the disc, whereas for steep profiles (p greater than or approximately equal to 2) fragments form in the inner regions. In addition, I find that the critical value of the cooling timescale in units of the orbital timescale, beta_crit, found in previous simulations is only applicable to certain disc surface mass density profiles and particular disc radii, and is not a general rule for all discs. I obtain an empirical fragmentation criterion relating the cooling timescale in units of the orbital timescale, beta, the surface mass density, the star mass and the radius. Finally, I carry out crucial resolution testing by performing the highest-resolution disc simulations to date. My results cast serious doubt on previous conclusions concerning the fragmentation of self-gravitating discs.
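For context, the stability and cooling parameters referred to in this abstract are conventionally defined as follows (standard definitions from the self-gravitating disc literature, not values or criteria quoted from the thesis itself):

```latex
% Standard definitions (for context only):
Q = \frac{c_s\,\kappa}{\pi G \Sigma}, \qquad
\beta = t_{\mathrm{cool}}\,\Omega, \qquad
\Sigma(R) \propto R^{-p},
% with the classical expectation (Gammie 2001) that a disc fragments where
Q \lesssim 1 \quad \text{and} \quad \beta \lesssim \beta_{\mathrm{crit}}.
```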
4

Incorporating inter-sample variability into cardiac electrophysiology simulations

Walmsley, John January 2014 (has links)
Sudden cardiac death kills 5-10 people per 10,000 population in Europe and the US each year. Individual propensity to arrhythmia and sudden cardiac death is typically assessed through clinical biomarkers, and variability in these biomarkers is a major challenge for risk stratification. Variability is observed at a wide range of spatio-temporal scales within the heart, from temporal fluctuations in ion channel behaviour, to inter-cell and inter-regional differences in ion channel expression, to structural differences between hearts. The extent to which variability manifests between spatial and temporal scales remains unclear, but it has a potentially crucial role in determining susceptibility to arrhythmia. In this dissertation we present a multi-scale study of the causes and consequences of variability in electrophysiology. At the sub-cellular level we demonstrate that, once inter-individual variability in ion channel conductance is taken into account, mRNA expression levels in failing human hearts predict the electrophysiological remodelling observed experimentally. On the tissue scale, we advocate the use of phenomenological models where information on subcellular processes is unavailable. We introduce a modification to a phenomenological model to capture beat-to-beat variability in action potential repolarisation recorded from four individual guinea pig myocytes. We demonstrate that, whilst temporal variability is dramatically reduced by inter-cell coupling, differences in mean action potential duration between cells may still become apparent at the tissue level. The ventricular myocardium has a heterogeneous structure not captured by the simplified representation of conduction used above. In our final case study, we challenge a model of conduction by directly comparing simulations to optical mapping recordings of ventricular activation from failing and non-failing human hearts. We observe that good fits to experimental data are obtained only when endocardially bound structures are not in view, suggesting a role in conduction for these structures, which are often ignored in cardiac simulations. Finally, we outline future directions for this work. We make the case for reporting inter-sample variability in experimental results, and conclude that, whilst variability may not always manifest across scales, its impact should be considered in both theoretical and experimental studies.
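The statistical effect described here, namely coupling averaging out beat-to-beat noise while preserving differences in mean action potential duration, can be illustrated with a minimal toy calculation (purely illustrative; this is not the phenomenological model used in the dissertation, and the APD values below are invented):

```python
import numpy as np

rng = np.random.default_rng(0)

n_beats = 1000
mean_apd = np.array([180.0, 200.0, 220.0, 240.0])  # hypothetical mean APDs (ms) of four cells
beat_noise = 10.0                                   # hypothetical beat-to-beat SD (ms)

# Uncoupled cells: each beat's APD fluctuates independently around the cell's mean.
apd_uncoupled = mean_apd + beat_noise * rng.standard_normal((n_beats, mean_apd.size))

# Strong coupling (idealised here as perfect averaging): every cell follows the tissue
# mean on each beat, so temporal noise is averaged over cells while the cell means
# set the single tissue-level APD.
apd_coupled = apd_uncoupled.mean(axis=1)

print("per-cell SD, uncoupled :", apd_uncoupled.std(axis=0).round(2))
print("tissue SD, coupled     :", apd_coupled.std().round(2))   # ~ beat_noise / sqrt(4)
print("tissue mean APD        :", apd_coupled.mean().round(1))  # reflects the cell means
```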
5

Protein folding, stability and recognition

Duan, Jianxin, January 2004 (has links)
Diss. (summary) Stockholm: Karol. inst., 2004. / Accompanied by 4 papers.
6

Computational models of intracellular signalling and synaptic plasticity induction in the cerebellum

Matos Pinto, Thiago January 2013 (has links)
Many molecules, and the complex interactions between them, underlie plasticity in the cerebellum. However, the exact relationship between cerebellar plasticity and the different signalling cascades remains unclear. Calcium-calmodulin dependent protein kinase II (CaMKII) regulates many forms of synaptic plasticity, but very little is known about its function during plasticity induction in the cerebellum. The aim of this thesis is to contribute to a better understanding of the molecular mechanisms that regulate the induction of synaptic plasticity in cerebellar Purkinje cells (PCs). The focus of the thesis is to investigate the role of CaMKII isoforms in the bidirectional modulation of plasticity induction at parallel fibre (PF)-PC synapses. For this investigation, computational models were constructed that represent CaMKII activation and the signalling network that mediates plasticity induction at these synapses. The model of CaMKII activation by calcium-calmodulin developed by Dupont et al. (2003) replicates the experiments of De Koninck and Schulman (1998). Both theoretical and experimental studies have argued that the phosphorylation and activation of CaMKII depend on the frequency of calcium oscillations. Using a simplified version of the Dupont model, it was demonstrated that CaMKII phosphorylation is mostly determined by the average calcium-calmodulin concentration, and therefore depends only indirectly on the actual frequency of calcium oscillations. I have shown that a pulsed application of calcium-calmodulin is, in fact, not required at all. These findings strongly indicate that the activation of CaMKII depends on the average calcium-calmodulin concentration and not on the oscillation frequency per se, as asserted in those studies. This thesis also presents the first model of AMPA receptor phosphorylation that simulates the induction of long-term depression (LTD) and potentiation (LTP) at the PF-PC synapse. The results of computer simulations of a simple mathematical model suggest that the balance between CaMKII-mediated phosphorylation and protein phosphatase 2B (PP2B)-mediated dephosphorylation of AMPA receptors determines whether LTD or LTP occurs in cerebellar PCs. This model replicates the experimental observations of Van Woerden et al. (2009), which indicate that CaMKII controls the direction of plasticity at PF-PC synapses. My computer simulations support Van Woerden et al.'s original suggestion that filamentous actin binding can enable CaMKII to regulate bidirectional plasticity at these synapses. The computational models of intracellular signalling constructed in this thesis advance the understanding of the mechanisms of synaptic plasticity induction in the cerebellum. These simple models are significant tools for future research by the scientific community.
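A minimal kinetic sketch of the average-versus-pulsed point (a toy first-order phosphorylation model, not the Dupont et al. (2003) CaMKII model; the rate constants are arbitrary and chosen to be slow relative to the pulse period):

```python
import numpy as np

def steady_state_phosphorylation(cam_signal, dt=0.001, k_on=0.2, k_off=0.05):
    """Integrate dP/dt = k_on*CaM(t)*(1 - P) - k_off*P (forward Euler) and
    return the mean phosphorylation level over the final quarter of the run."""
    p = 0.0
    trace = np.empty(cam_signal.size)
    for i, cam in enumerate(cam_signal):
        p += dt * (k_on * cam * (1.0 - p) - k_off * p)
        trace[i] = p
    return trace[-trace.size // 4:].mean()

t = np.arange(0.0, 400.0, 0.001)

# Pulsed calcium-calmodulin: 1 Hz square pulses, 20% duty cycle, amplitude 1.0
pulsed = ((t % 1.0) < 0.2).astype(float)
# Constant input with the same time-averaged concentration
constant = np.full_like(t, pulsed.mean())

print("steady state, pulsed  :", round(steady_state_phosphorylation(pulsed), 3))
print("steady state, constant:", round(steady_state_phosphorylation(constant), 3))
# With rates slow relative to the pulse period, the two values nearly coincide,
# i.e. the level tracks the average CaM concentration rather than the pulse frequency.
```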
7

Método de otimização assistido para comparação entre poços convencionais e inteligentes considerando incertezas / Assisted optimization method for comparison between conventional and intelligent wells considering uncertainties

Pinto, Marcio Augusto Sampaio, 1977- 11 April 2013 (has links)
Advisor: Denis José Schiozer / Thesis (doctorate) - Universidade Estadual de Campinas, Faculdade de Engenharia Mecânica e Instituto de Geociências / Abstract: In this work, an assisted optimization method is proposed to establish a refined comparison between conventional and intelligent wells, considering geological and economic uncertainties. The methodology is divided into four steps: (1) representation and operation of the wells in the simulator; (2) optimization of the completed layers/blocks in the conventional wells and of the number and placement of the valves in the intelligent wells; (3) optimization of the operation of the conventional wells and of the valves in the intelligent wells, using a hybrid optimization method comprising a fast genetic algorithm for global optimization and the conjugate gradient method for local optimization; (4) decision analysis considering the results of all geological and economic scenarios. The method was validated on simpler reservoir models with a five-spot configuration of vertical wells, and then applied to a more complex reservoir model with four producer and four injector wells, all horizontal. The results show a clear difference when the proposed methodology is applied to compare the two types of wells. The comparison also covers intelligent wells under three types of control: reactive control and two forms of proactive control.
The results show, for the cases used in this work, a large advantage in using intelligent wells with at least one form of proactive control, increasing oil recovery and NPV and reducing water production and injection in most cases / Doctorate / Reservatórios e Gestão / Doctor of Petroleum Sciences and Engineering
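The hybrid global/local optimization loop described in the abstract can be sketched generically as follows (the objective function, bounds and SciPy routine are placeholders, not the reservoir-simulator-coupled implementation used in the thesis):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)

def npv(x):
    """Placeholder objective standing in for the reservoir-simulation NPV;
    the real method evaluates each candidate well/valve setting in a simulator."""
    return -np.sum((x - 0.3) ** 2) + np.sum(np.cos(5 * x))

def hybrid_optimize(dim=6, pop_size=20, generations=30, bounds=(0.0, 1.0)):
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, dim))

    # --- global stage: a simple genetic algorithm (selection, crossover, mutation) ---
    for _ in range(generations):
        fitness = np.array([npv(ind) for ind in pop])
        order = np.argsort(fitness)[::-1]            # maximise NPV
        parents = pop[order[: pop_size // 2]]
        children = []
        for _ in range(pop_size - parents.shape[0]):
            a, b = parents[rng.integers(len(parents), size=2)]
            mask = rng.random(dim) < 0.5             # uniform crossover
            child = np.where(mask, a, b)
            child += rng.normal(0.0, 0.05, dim)      # Gaussian mutation
            children.append(np.clip(child, lo, hi))
        pop = np.vstack([parents, children])

    best = pop[np.argmax([npv(ind) for ind in pop])]

    # --- local stage: gradient-based refinement of the best GA candidate ---
    result = minimize(lambda x: -npv(x), best, method="CG")
    return result.x, -result.fun

x_opt, best_npv = hybrid_optimize()
print("optimised controls:", np.round(x_opt, 3), " NPV:", round(best_npv, 3))
```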
8

Economic scheduling in Grid computing using Tender models

Bsoul, Mohammad January 2007 (has links)
Economic scheduling needs to be considered for Grid computing environments because it gives resource providers an incentive to supply their resources, and it enforces efficient use of resources, since users have to pay for what they use. Tendering is a suitable model for Grid scheduling because users initiate the negotiations to find suitable resources for executing their jobs. Furthermore, users specify their job requirements with their requests, and the resources therefore reply with bids based on the cost of taking on the job and on the availability of their processors. In this thesis, a framework for economic Grid scheduling using tendering is proposed. The framework entities, such as users, brokers and resources, employ a tender/contract-net model to negotiate prices and deadlines, with brokers acting on behalf of users. During the negotiations, the entities aim to maximise their performance, which is measured by a number of metrics. In order to evaluate the entities' performance under different scenarios, a Java-based simulator called MICOSim, supporting event-driven simulation of economic Grid scheduling, is presented. MICOSim can simulate more than one hundred entities faster than real time. It is concluded from the evaluation that, when job submission is static, users who are interested in increasing the job success rate and paying less for executing their jobs should consider the received prices when selecting the most appropriate bids, while users who are interested in improving the average job satisfaction rate should consider either the received completion time, or both price and completion time, when selecting the most suitable bids. The best broker strategy is the one that does not take meeting the job deadlines into account in the bids it sends to job owners. Finally, the resource strategy that considers the price when deciding whether to reply to a request is superior to the other resource strategies; the only exception is employing this strategy with a price that is too low. However, there is only a tiny difference between the performances of the different user strategies under dynamic submission. It is also concluded from the evaluation that broker strategies perform best when the revenue they target from users is reasonable; thus, a broker's aim should be to receive reasonable revenue (neither too low nor too high) for acting on behalf of users. It is observed from the results that a strategy's performance is influenced by the behaviour of other entities, such as the submission times of user jobs. Finally, it is observed that the characteristics of the entities affect the performance of the strategies. For example, the two user strategies that consider the received completion time, or both price and completion time, when deciding whether to accept a broker bid have similar performance, because there are resources with prices ranging from cheap to expensive, as well as resources that do not care about the price paid for the execution; as a result, the price threshold does not have a large effect on performance.
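The tender/contract-net interaction described above can be sketched as follows (an illustrative toy, not MICOSim itself; the entity names, pricing rules and selection policies are invented for the example):

```python
from dataclasses import dataclass

@dataclass
class Job:
    owner: str
    length: float        # processing units required
    deadline: float      # latest acceptable completion time (hours)
    budget: float        # maximum price the user will pay

@dataclass
class Resource:
    name: str
    speed: float         # processing units per hour
    price_per_unit: float
    queue: float = 0.0   # hours until the resource is free

    def bid(self, job):
        """Reply to a tender with a (price, completion time) bid, or None to decline."""
        completion = self.queue + job.length / self.speed
        price = job.length * self.price_per_unit
        if price > job.budget:
            return None                      # not worth bidding
        return {"resource": self.name, "price": price, "completion": completion}

def run_tender(job, resources, policy="price"):
    """Broker collects bids and awards the contract according to the user's policy."""
    bids = [b for r in resources if (b := r.bid(job)) is not None]
    feasible = [b for b in bids if b["completion"] <= job.deadline]
    if not feasible:
        return None
    key = (lambda b: b["price"]) if policy == "price" else (lambda b: b["completion"])
    return min(feasible, key=key)

resources = [Resource("cheap-slow", speed=1.0, price_per_unit=1.0),
             Resource("fast-pricey", speed=4.0, price_per_unit=3.0, queue=0.5)]
job = Job(owner="user1", length=8.0, deadline=10.0, budget=30.0)

print("awarded by price     :", run_tender(job, resources, policy="price"))
print("awarded by completion:", run_tender(job, resources, policy="completion"))
```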
9

Computational studies of biomolecules

Chen, Sih-Yu January 2017 (has links)
In modern drug discovery, lead discovery is a term used to describe the overall process from hit discovery to lead optimisation, with the goal of identifying drug candidates. This can be greatly facilitated by the use of computer-aided (or in silico) techniques, which can reduce experimentation costs along the drug discovery pipeline. The relevant techniques include molecular modelling to obtain structural information, molecular dynamics (covered in Chapter 2), activity or property prediction by means of quantitative structure activity/property models (QSAR/QSPR), for which machine learning techniques are introduced (covered in Chapter 1), and quantum chemistry, used to explain chemical structure, properties and reactivity. This thesis is divided into five parts. Chapter 1 starts with an outline of the early stages of drug discovery, introducing the use of virtual screening for hit and lead identification. Such approaches may roughly be divided into structure-based (docking, by far the most widely used) and ligand-based, leading to a set of promising compounds for further evaluation. This is followed by an introduction to machine learning techniques, which recur throughout the thesis, and a brief review of the "no free lunch" theorem, which states that no learning algorithm can perform optimally on all problems; this implies that the predictive accuracy of multiple models must be validated for optimal model selection. As the dimensionality of the feature space increases, the issue referred to as "the curse of dimensionality" becomes a challenge. The last sections of the chapter focus on supervised classification with Random Forests. Computer-based analyses are an integral part of drug discovery. Chapter 2 begins with a discussion of molecular docking, including strategies for incorporating protein flexibility at global and local levels, and then focuses on an automated docking program, AutoDock, which uses a Lamarckian genetic algorithm and an empirical binding free energy function. The second part of the chapter gives a brief introduction to molecular dynamics. Chapter 3 describes how we constructed a dataset of known binding sites with co-crystallised ligands, used to extract features characterising the structural and chemical properties of the binding pocket. A machine learning algorithm was adopted to create a three-way predictive model capable of assigning each case to one of three classes (regular, orthosteric and allosteric) for in silico selection of allosteric sites, with a feature selection algorithm (Gini importance) used to rationalise the choice of the descriptors most influential in classifying the binding pockets. In Chapter 4, we made use of structure-based virtual screening, focusing on docking a fluorescent sensor to a non-canonical DNA quadruplex structure. The preferred binding poses, binding site and interactions are scored, followed by application of an ONIOM model to re-score the binding poses of some DNA-ligand complexes, focusing only on the best pose (with the lowest binding energy) from AutoDock. The use of a conformational ensemble pre-generated with MD to account for receptor flexibility, followed by docking, is termed a "relaxed complex" scheme. Chapter 5 concerns the BLUF domain photocycle; we focus on the conformational preferences of some critical residues in the flavin binding site after a charge redistribution has been introduced.
This work provides another activation model to address controversial features of the BLUF domain.
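A minimal sketch of the three-way pocket classification workflow outlined for Chapter 3 (using scikit-learn with synthetic placeholder data; the feature names and labels are illustrative, not the descriptors or dataset from the thesis):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(1)

# Synthetic stand-in for binding-pocket descriptors and labels; real descriptors
# extracted from co-crystallised structures would replace this random data.
feature_names = ["volume", "depth", "hydrophobicity", "polar_fraction", "flexibility"]
X = rng.normal(size=(600, len(feature_names)))
y = rng.integers(0, 3, size=600)        # 0 = regular, 1 = orthosteric, 2 = allosteric

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Random Forest with the Gini impurity criterion; feature_importances_ gives the
# Gini-based ranking used to rationalise which descriptors drive the classification.
clf = RandomForestClassifier(n_estimators=500, criterion="gini", random_state=0)
clf.fit(X_train, y_train)

# With random placeholder data, performance is at chance level; this only shows the workflow.
print(classification_report(y_test, clf.predict(X_test),
                            target_names=["regular", "orthosteric", "allosteric"]))
for name, importance in sorted(zip(feature_names, clf.feature_importances_),
                               key=lambda t: -t[1]):
    print(f"{name:16s} {importance:.3f}")
```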
10

An efficient and robust simulator for wear of total knee replacements

Burchardt, Ansgar, Abicht, Christian, Sander, Oliver 28 November 2022 (has links)
Wear on total knee replacements is an important criterion for their performance characteristics. Numerical simulations of such wear have received increasing attention in recent years. They have the potential to be much faster and less expensive than the in vitro tests in use today. While it is unlikely that in silico tests will replace actual physical tests in the foreseeable future, a judicious combination of both approaches can help make both implant design and pre-clinical testing quicker and more cost-effective. The challenge today for the design of simulation methods is to obtain results that convey quantitative information, and to do so quickly and reliably. This involves the choice of mathematical models as well as the numerical tools used to solve them. The correctness of the choice can only be validated by comparison with experimental results. In this article, we present finite element simulations of the wear in total knee replacements during the gait cycle standardized in the ISO 14243-1 document, used for compliance testing in several countries. As the ISO 14243-1 standard is precisely defined and publicly available, it can serve as an excellent benchmark for the comparison of wear simulation methods. We use comparatively simple wear and material models, but we solve them using a new wear algorithm that combines extrapolation of the geometry changes with a contact algorithm based on nonsmooth multigrid ideas. The contact algorithm works without Lagrange multipliers and penalty parameters, achieving unparalleled stability and efficiency. We compare our simulation results with experimental data from physical tests using two different actual total knee replacements. Even though the model is simple, we can predict the total mass loss due to wear after 5 million gait cycles, and we observe a good match between the wear patterns seen in experiments and in our simulation results. When compared with a state-of-the-art penalty-based solver for the same model, we measure a roughly fivefold increase in execution speed.
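For orientation, one common way to structure such a wear-with-extrapolation loop is sketched below (an Archard-type wear law, random placeholder contact data and a fixed extrapolation factor are assumptions for illustration; the article's own wear model, contact solver and update scheme are not reproduced here):

```python
import numpy as np

def simulate_wear(n_blocks, cycles_per_block, total_cycles,
                  wear_factor=1.0e-10, n_nodes=200, seed=0):
    """Accumulate Archard-type wear (depth ~ k * pressure * sliding distance) over a
    few simulated gait-cycle blocks and extrapolate the geometry change in between."""
    rng = np.random.default_rng(seed)
    wear_depth = np.zeros(n_nodes)                       # per-node wear on the insert surface
    extrapolation = total_cycles / (n_blocks * cycles_per_block)

    for _ in range(n_blocks):
        # Placeholder for the contact solve: per-node contact pressure (MPa) and
        # sliding distance (mm) over one gait cycle; the real method computes these
        # with a finite-element contact algorithm on the currently worn geometry.
        pressure = np.clip(rng.normal(8.0, 3.0, n_nodes), 0.0, None)
        sliding = np.clip(rng.normal(20.0, 5.0, n_nodes), 0.0, None)

        increment = wear_factor * pressure * sliding * cycles_per_block
        wear_depth += increment * extrapolation          # extrapolated geometry update

    return wear_depth

depth = simulate_wear(n_blocks=10, cycles_per_block=1000, total_cycles=5_000_000)
print(f"max linear wear: {depth.max():.3f} mm, mean: {depth.mean():.3f} mm")
```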
