191. Posteriori Error Analysis for the p-version of the Finite Element Method. Yang, Xiaofeng, 16 January 2014.
In the framework of the Jacobi-weighted Sobolev space, we design a-posteriori error estimators and error indicators associated with residuals and with jumps of normal derivatives on internal edges, using appropriate Jacobi weights, for the p-version of the finite element method. With the help of quasi-Jacobi projection operators, upper and lower bounds for the indicators and estimators are analyzed, showing that this a-posteriori error estimation is quasi-optimal. The indicators and estimators are computed for some model problems and implemented in C++. The numerical results show the reliability of our indicators and estimators.
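A minimal sketch of the residual-plus-jump indicator idea on a 1D model problem (the Poisson equation, the (h/p) weighting, and the stand-in discrete solution are all assumptions for illustration; the thesis works with Jacobi-weighted Sobolev norms, not this simplified setting):

```python
import numpy as np

# Hedged sketch, not the thesis's C++ code: a residual-based indicator for
# -u'' = f on (0,1) with u(0)=u(1)=0 and exact solution u = sin(pi*x).
# Each element carries a degree-p least-squares fit standing in for the
# p-version solution u_h; the indicator combines the (h/p)-weighted interior
# residual f + u_h'' with jumps of u_h' at interior nodes.

u_exact = lambda x: np.sin(np.pi * x)
f_rhs = lambda x: np.pi**2 * np.sin(np.pi * x)

def indicators(n_elems=4, p=3, nq=200):
    nodes = np.linspace(0.0, 1.0, n_elems + 1)
    fits = []
    for k in range(n_elems):
        xs = np.linspace(nodes[k], nodes[k + 1], 2 * p + 1)
        fits.append(np.polynomial.Polynomial.fit(xs, u_exact(xs), deg=p))
    eta2 = np.zeros(n_elems)
    for k, pol in enumerate(fits):
        a, b = nodes[k], nodes[k + 1]
        h = b - a
        xq = np.linspace(a, b, nq)
        r = f_rhs(xq) + pol.deriv(2)(xq)             # interior residual f + u_h''
        eta2[k] = (h / p) ** 2 * np.mean(r**2) * h   # crude L2 quadrature
    for k in range(n_elems - 1):                     # derivative jumps, split evenly
        jump = fits[k + 1].deriv()(nodes[k + 1]) - fits[k].deriv()(nodes[k + 1])
        eta2[k] += 0.5 * jump**2
        eta2[k + 1] += 0.5 * jump**2
    return np.sqrt(eta2)

print(indicators(p=2))   # per-element indicators; they shrink as p increases
print(indicators(p=4))
```

Reliability here would mean the summed indicators bound the true error up to constants; the thesis proves such two-sided bounds in the Jacobi-weighted setting.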
192. Simulations of Surfactant Driven Thin Film Flow. Kumar, Shreyas, 01 January 2013.
This thesis is intended to fulfill the requirements of the Math and Physics departments at Harvey Mudd College. We begin with a brief introduction to the study of surfactant dynamics, followed by some background on the experimental framework our work relates to. We then derive the model we use and explore in depth the nature of the equation of state (EoS): the relationship between the surface tension of a fluid and the surfactant concentration. We consider the effect of using an empirical equation of state on the results of the simulations, and compare the new results against those produced using a multilayer EoS as well as experimental observations. We find that the empirical EoS leads to two new behaviors: preservation of large gradients of surfactant concentration, and the occurrence of dynamics in distinct regimes. These behaviors suggest that the empirical EoS improves the agreement of the model's predictions with experiment.
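A toy sketch of what the EoS choice controls (both functional forms below are schematic stand-ins with assumed constants, not the thesis's multilayer or empirically fitted EoS):

```python
import numpy as np

# Illustrative only: two candidate equations of state sigma(Gamma) relating
# surface tension to surfactant concentration. Values are assumed.

sigma_clean, sigma_min = 72.0, 30.0   # mN/m, assumed clean/saturated tensions

def eos_linear(gamma):
    # classic dilute-limit EoS: tension falls linearly with concentration
    return sigma_clean - (sigma_clean - sigma_min) * gamma

def eos_empirical(gamma, gamma_c=1.0, width=0.08):
    # smoothed step: tension drops steeply near a critical concentration
    # gamma_c and saturates beyond it, mimicking an empirically fitted EoS
    return sigma_min + 0.5 * (sigma_clean - sigma_min) * (
        1.0 - np.tanh((gamma - gamma_c) / width))

gam = np.linspace(0.0, 2.0, 5)
print(eos_linear(gam))
print(eos_empirical(gam))
# The Marangoni stress scales with sigma'(Gamma) * dGamma/dx, so a steep
# empirical EoS concentrates stress near Gamma ~ gamma_c; that is one way
# large concentration gradients can persist rather than smooth out.
```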
193. Decoherence, Measurement and Quantum Computing in Ion Traps. Schneider, Sara, date unknown.
This thesis is concerned with various aspects of ion traps and their use as quantum simulation and computation devices. In its first part we investigate various sources of noise and decoherence in ion traps. As quantum information is very fragile, detailed knowledge of noise and decoherence sources in a quantum computation device is essential. In the special case of an ion-trap quantum computer, we investigate the effects of intensity and phase noise in the laser used to perform the gate operations. We then look at other sources of noise which are present without a laser being switched on: fluctuations in the trapping frequency caused by noise in the electric potentials applied to the trap, and fluctuating electrical fields which cause heating of the centre-of-mass vibrational state of the ions in the trap. For the case of fluctuating electrical fields we estimate the effect on a quantum gate operation. We then propose a scheme for performing quantum gates without having the ions cooled down to their motional ground state. The second part deals with various aspects of the use of ion traps as a device for quantum computation. We start with the use of ionic qubits as a measurement device for the centre-of-mass vibrational mode and investigate in detail the effect these measurements have on the vibrational mode. If one wants to use quantum computation devices to simulate quantum mechanics, it is of interest to know how to simulate, say, a k-level system with N qubits. We investigate the easiest case of this wider problem and look at how to simulate a three-level system (a so-called trit) with two qubits in an ion-trap quantum computer. We show how to obtain and measure an SU(3) geometric phase with this toy model. Finally, we investigate how to simulate collective angular-momentum models with a string of qubits in an ion trap. We assume that the ionic qubits are coupled to a thermal reservoir and derive a master equation for this case. We investigate the semiclassical limit of this master equation and, for two qubits in the trap, determine the entanglement of the steady state. We also outline a way to find the steady state of the master equation using coherence vectors.
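A toy illustration of the kind of decoherence the first part quantifies (a single qubit under pure dephasing; the Hamiltonian, rate, and integrator are assumptions for illustration, not the thesis's noise models):

```python
import numpy as np

# Minimal sketch: laser phase noise acts on a qubit roughly like pure
# dephasing. Lindblad form with L = sqrt(gamma)*sz and H = (omega/2)*sz:
#   drho/dt = -i[H, rho] + gamma * (sz rho sz - rho)   (since sz^2 = I)

sz = np.array([[1, 0], [0, -1]], dtype=complex)

def evolve(rho, omega=1.0, gamma=0.1, dt=1e-3, steps=5000):
    H = 0.5 * omega * sz
    for _ in range(steps):
        comm = H @ rho - rho @ H
        dissip = gamma * (sz @ rho @ sz - rho)
        rho = rho + dt * (-1j * comm + dissip)   # simple Euler step
    return rho

# start in an equal superposition; the off-diagonal coherence decays as
# exp(-2*gamma*t), so populations survive while phase information is lost
rho0 = 0.5 * np.ones((2, 2), dtype=complex)
rho_t = evolve(rho0)
print("remaining coherence:", abs(rho_t[0, 1]))  # ~0.5*exp(-1) ~ 0.18 at t=5
```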
194. A novel framework for protein structure prediction. Bondugula, Rajkumar, January 2007.
Thesis (Ph.D.), University of Missouri-Columbia, 2007. Includes vita and bibliographical references.
195. Input to output transfer in neurons. Pelko, Miha, January 2016.
Computational modelling plays an increasing role in neuroscience research, providing not only theoretical frameworks for describing the activity of the brain and the nervous system, but also tools and techniques for better understanding data obtained with various recording methods. The focus of this thesis is on the latter: using computational modelling to assist in analyzing measurement results and the mechanisms underlying them. The first study is an example of using a computational model with intracellular in vivo recordings. Such recordings are becoming routine, yielding insights into rich sub-threshold neural dynamics and the integration of information by neurons under realistic conditions. In particular, these methods have been used to estimate the global excitatory and inhibitory synaptic conductances experienced by the soma. I first present a method to estimate the effective somatic excitatory and inhibitory conductances, as well as their event rate and event size, from intracellular in vivo recordings; the method was applied to recordings from primary motor cortex of awake behaving mice. Next, I studied how dendritic filtering leads to misestimation of the global excitatory and inhibitory conductances. Using analytical treatment of a simplified model and numerical simulations of a detailed compartmental model, I show how much both the mean and the variance of the synaptic conductances are underestimated by methods based on somatic recordings. The influence of synaptic distance from the soma on these estimates, for both excitatory and inhibitory inputs and across several realistic neuronal morphologies, is discussed. The last study attempted to classify the region of synaptic location from measurements of the excitatory postsynaptic potential at two different locations on the dendritic tree. The measurements were obtained from in vitro intracellular recordings in slices of rat somatosensory cortex during glutamate-uncaging stimulation. Models were used to train the classifier and to demonstrate the extent to which automatic classification agrees with the manual classification performed by the experimenter.
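A minimal sketch of the somatic conductance-estimation idea (assumed point-neuron steady-state physics and values; the thesis's method additionally recovers event rates and sizes and then quantifies how dendritic filtering biases such estimates):

```python
import numpy as np

# In quasi-steady state the somatic potential satisfies
#   0 = -gL(V-EL) - ge(V-Ee) - gi(V-Ei) + I_inj,
# so recordings at two holding currents give the total conductance and an
# effective reversal potential, from which ge and gi can be separated.
# All parameter values below are assumed for illustration.

gL, EL, Ee, Ei = 10.0, -70.0, 0.0, -80.0      # nS, mV

def v_steady(ge, gi, I):
    g_tot = gL + ge + gi
    return (gL * EL + ge * Ee + gi * Ei + I) / g_tot

def estimate(V1, V2, I1, I2):
    # two measurements: the slope gives g_tot, the intercept the reversal
    g_tot = (I2 - I1) / (V2 - V1)
    E_eff = V1 - I1 / g_tot
    # decompose using g_tot*E_eff = gL*EL + ge*Ee + gi*Ei, g_tot = gL+ge+gi
    gi = (gL * EL + (g_tot - gL) * Ee - g_tot * E_eff) / (Ee - Ei)
    ge = g_tot - gL - gi
    return ge, gi

ge_true, gi_true = 5.0, 15.0
V1 = v_steady(ge_true, gi_true, I=0.0)
V2 = v_steady(ge_true, gi_true, I=100.0)      # pA-scale injection, assumed
print(estimate(V1, V2, 0.0, 100.0))           # recovers (5.0, 15.0)
```

With dendritic filtering, V at the soma no longer reflects distal synapses faithfully, which is exactly why this clean point-neuron inversion underestimates the true conductances.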
196. Model Selection and Parameter Estimation in the Chemotherapeutic Treatment of Tumors via Bayesian Inference (original title: "Seleção de Modelos e Estimação de Parâmetros no Tratamento Quimioterápico de Tumores via Inferência Bayesiana"). MATA, A. M. M., 21 July 2017.
Cancer is a disease arising from the disordered growth of cells. Antineoplastic chemotherapy is commonly used to treat the most prevalent cancers. In this context, research has turned to mathematical models that describe the growth of tumor cells under the action of a chemotherapeutic drug. Given the variety of models available in the literature for this purpose, a method for selecting the most suitable model is needed. This dissertation studies mathematical models of tumor treatment and applies Approximate Bayesian Computation (ABC) to select the model that best represents the observed data. The ABC algorithm used was deterministic, prioritizing model selection. A SIR particle filter was then applied to the selected model, refining the parameter estimates. Tumor growth models were studied via ordinary differential equations, with parameters assumed constant. The models were built on two-compartment pharmacokinetics, which permits the study of orally administered antineoplastic drugs. In addition, well-known tumor-growth formulations were used, augmented with the influence of a single dose of a chemotherapeutic drug.
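A schematic ABC-rejection sketch of the model-selection idea (toy growth models, prior, tolerance, and synthetic data are all assumed; the dissertation's deterministic ABC variant, drug term, and SIR particle filter are not reproduced here):

```python
import numpy as np

# Choose between two tumor-growth laws, logistic vs. Gompertz, by simulating
# each under parameters drawn from a prior and keeping draws whose
# trajectories fall within a tolerance of the observed series. The posterior
# model probability is estimated from the share of accepted draws per model.

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 25)
N0, K = 1.0, 100.0                               # assumed initial size, capacity

def logistic(r):
    return K * N0 / (N0 + (K - N0) * np.exp(-r * t))

def gompertz(r):
    return K * np.exp(np.log(N0 / K) * np.exp(-r * t))

observed = gompertz(0.35) + rng.normal(0.0, 2.0, t.size)   # synthetic "data"

def abc_accepted(model, n=20000, eps=15.0):
    accepted = 0
    for _ in range(n):
        r = rng.uniform(0.05, 1.0)               # flat prior on the growth rate
        if np.linalg.norm(model(r) - observed) < eps:
            accepted += 1
    return accepted

a_log, a_gom = abc_accepted(logistic), abc_accepted(gompertz)
print("P(logistic | data) ~", a_log / (a_log + a_gom))
print("P(gompertz | data) ~", a_gom / (a_log + a_gom))
```

Accepted parameter draws for the winning model would then seed the particle filter that sharpens the parameter estimates.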
197. Algebraic Theory of Minimal Nondeterministic Finite Automata with Applications. Cazalis, Daniel S., 14 November 2007.
Since the 1950s, the theory of deterministic and nondeterministic finite automata (DFAs and NFAs, respectively) has been a cornerstone of theoretical computer science. In this dissertation, our main object of study is minimal NFAs. In contrast with minimal DFAs, minimal NFAs are computationally challenging: first, there can be more than one minimal NFA recognizing a given language; second, the problem of converting an NFA to a minimal equivalent NFA is NP-hard, even for NFAs over a unary alphabet. Our study is based on the development of two main theories, inductive bases and partials, which in combination form the foundation for an incremental algorithm, ibas, to find minimal NFAs. An inductive basis is a collection of languages with the property that it can generate (through union) each of the left quotients of its elements. We prove a fundamental characterization theorem which says that a language can be recognized by an n-state NFA if and only if it can be generated by an n-element inductive basis. A partial is an incompletely-specified language. We say that an NFA recognizes a partial if its language extends the partial, meaning that the NFA's behavior is unconstrained on unspecified strings; it follows that a minimal NFA for a partial is also minimal for its language. We therefore direct our attention to minimal NFAs recognizing a given partial. Combining inductive bases and partials, we generalize our characterization theorem, showing that a partial can be recognized by an n-state NFA if and only if it can be generated by an n-element partial inductive basis. We apply our theory to develop and implement ibas, an incremental algorithm that finds minimal partial inductive bases generating a given partial. In the case of unary languages, ibas can often find minimal NFAs of up to 10 states in about an hour of computing time; with brute-force search this would require many trillions of years.
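A toy sketch of the inductive-basis notion in the purely periodic unary case (the representation and example are assumptions for illustration; the thesis's ibas algorithm searches over partial inductive bases incrementally rather than checking a fixed candidate):

```python
# A purely periodic unary language is a set of length-residues mod a period,
# so left quotients and unions become exact set operations. An inductive
# basis is a collection of languages whose every left quotient is a union
# of basis elements; the characterization theorem ties an n-element basis
# to an n-state NFA.

PERIOD = 6

def quotient(lang):
    # left quotient by the letter 'a': shift every residue down by one
    return frozenset((r - 1) % PERIOD for r in lang)

def generated_by_union(basis, target):
    # target is a union of basis elements iff the union of all elements
    # contained in target equals target (a greedy check suffices for unions)
    covered = frozenset().union(*(b for b in basis if b <= target))
    return covered == target

def is_inductive_basis(basis):
    return all(generated_by_union(basis, quotient(b)) for b in basis)

# residue classes {0,2,4} ("even") and {1,3,5} ("odd") mod 6: each one's
# quotient is the other, so together they form a 2-element inductive basis,
# matching the 2-state automaton for (aa)*
even, odd = frozenset({0, 2, 4}), frozenset({1, 3, 5})
print(is_inductive_basis({even, odd}))   # True
print(is_inductive_basis({even}))        # False: its quotient is not covered
```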
198. The Bayesian validation metric: a framework for probabilistic model calibration and validation. Tohme, Tony, January 2020.
Thesis: S.M., Massachusetts Institute of Technology, Computation for Design and Optimization Program, May 2020. Includes bibliographical references (pages 109-114).
In model development, model calibration and validation play complementary roles toward learning reliable models. In this thesis, we propose and develop the "Bayesian Validation Metric" (BVM) as a general model validation and testing tool. We show that the BVM can represent all the standard validation metrics - square error, reliability, probability of agreement, frequentist, area, probability density comparison, statistical hypothesis testing, and Bayesian model testing - as special cases, while improving, generalizing, and further quantifying their uncertainties. In addition, the BVM assists users and analysts in designing and selecting their models by allowing them to specify their own validation conditions and requirements. Further, we expand the BVM framework into a general calibration and validation framework by inverting the validation mathematics into a method for generalized Bayesian regression and model learning. We perform Bayesian regression based on a user's definition of model-data agreement. This allows for model selection on any type of data distribution, unlike Bayesian and standard regression techniques, which "fail" in some cases. We show that our tool is capable of representing and combining Bayesian regression, standard regression, and likelihood-based calibration techniques in a single framework while generalizing aspects of these methods. The tool also offers new insights into the interpretation of the predictive envelopes in Bayesian regression, standard regression, and likelihood-based methods, while giving the analyst more control over these envelopes.
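A minimal Monte Carlo sketch of the BVM's central quantity as the abstract describes it (toy distributions and agreement rule assumed; not the thesis's implementation):

```python
import numpy as np

# The BVM asks: what is the probability that a model output and a data value
# "agree", where the user supplies the agreement condition? With the boolean
# rule |y_model - y_data| < eps this reduces to the probability-of-agreement
# special case; other rules recover other standard validation metrics.

rng = np.random.default_rng(1)

def bvm_probability(sample_model, sample_data, agree, n=200_000):
    ym = sample_model(n)
    yd = sample_data(n)
    return np.mean(agree(ym, yd))      # Monte Carlo estimate of P(agreement)

# toy model-output and data distributions (assumed)
model = lambda n: rng.normal(10.0, 1.0, n)
data  = lambda n: rng.normal(10.5, 0.8, n)

eps = 1.0
agree = lambda ym, yd: np.abs(ym - yd) < eps   # user-chosen validation condition
print("P(agreement) ~", bvm_probability(model, data, agree))
```

Swapping the `agree` rule (e.g., relative error, one-sided bounds, or a graded rather than boolean condition) is how the single framework specializes to different metrics.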
199. Meta-modeling and Optimization of Computational Fluid Dynamics (CFD) analysis in thermal comfort for energy-efficient Chilled Beams-based Heating, Ventilation and Air-Conditioning (HVAC) systems. Ghanta, Nikhilesh, January 2020.
Thesis: S.M., Massachusetts Institute of Technology, Computation for Design and Optimization Program, May 2020. Includes bibliographical references (pages 172-178).
With the rapid rise in the use of air-conditioning systems and technological advancements, there is an ever-increasing need to optimize HVAC systems for energy efficiency while maintaining adequate occupant thermal comfort. HVAC systems in buildings alone account for almost 15% of overall energy consumption across all sectors in the world, and optimizing them would contribute toward mitigating climate change and reducing the global carbon footprint. A relatively modern solution is to implement a smart-building-based control system, and one objective of this study is to understand the physical phenomena associated with workspaces conditioned by chilled beams and to evaluate methods for reducing energy consumption.
Building on initial work aimed at creating a workflow for a smart building, this thesis presents the results of both experimental and computational studies of occupant thermal comfort with chilled beams (primarily in conference rooms) and the various inefficiencies involved. Results from these studies helped identify an optimal location for installing a chilled beam to counter incoming solar irradiation through an external window while keeping energy consumption low. A detailed understanding of the parameters influencing the temperature distribution in a room with chilled beams is achieved through CFD studies and analysis of logged experimental data.
The work converges on a fundamental question: where, how, and what to measure to best monitor and control human thermal comfort. A novel technique is presented that uses existing sensors and offers a significant improvement over other methods in practice; it was validated in a series of experiments. The thesis concludes with early work on hybrid HVAC systems combining chilled beams and ceiling fans for greater economic gains. Future work should perform CFD simulations for a better understanding of hybrid HVAC systems, both in conference rooms and in open-plan office spaces, and design a new sensor that could better estimate human thermal comfort.
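An illustrative meta-modeling sketch in the spirit of the title (all sample values are assumed; in a real study each sample would come from a CFD run and the comfort metric from an occupant-comfort model):

```python
import numpy as np

# Fit a cheap polynomial surrogate to a handful of hypothetical "CFD samples"
# of a scalar discomfort score versus chilled-beam distance from the window,
# then optimize the surrogate instead of re-running CFD at every candidate.

rng = np.random.default_rng(2)

x_runs = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])           # beam position, m
discomfort = np.array([2.1, 1.4, 0.9, 0.8, 1.1, 1.7]) \
             + rng.normal(0.0, 0.02, 6)                     # lower is better

surrogate = np.polynomial.Polynomial.fit(x_runs, discomfort, deg=2)

xs = np.linspace(0.5, 3.0, 251)
best = xs[np.argmin(surrogate(xs))]
print(f"surrogate's optimal beam location ~ {best:.2f} m from the window")
```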
200. Modeling exascale data generation and storage for the Large Hadron Collider computing network. Massaro, Evan K., January 2020.
Thesis: S.M., Massachusetts Institute of Technology, Computation for Design and Optimization Program, May 2020. Includes bibliographical references (pages 85-86).
The Large Hadron Collider (LHC) is the world's largest and highest-energy particle accelerator. With the particle collisions produced at the LHC and measured with the Compact Muon Solenoid (CMS) detector, the CMS experimental group performs precision measurements and general searches for new physics. Year-round CMS operations produce 100 Petabytes of physics data per year, stored within a globally distributed grid network of 70 scientific institutions. By 2027, upgrades to the LHC and the CMS detector will allow unprecedented probes of microscopic physics, but in doing so will generate 2,000 Petabytes (2 Exabytes) of physics data per year. To address the computational requirements of CMS, the costs of CPU resources, disk and tape storage, and tape drives were modeled. These resources were then used in a model of the major CMS computing processes and required infrastructure.
In addition to estimating budget requirements, the model produced bandwidth requirements, with the transatlantic network link addressed explicitly. Given discrete or continuously parameterized policy decisions, the system cost and required network bandwidth could be modeled as functions of the policy. This sensitivity analysis was coupled with an uncertainty quantification of the model outputs, which are functions of the estimated system parameters. The expected system cost and maximum transatlantic network activity were modeled to increase 40-fold in 2027 relative to 2018. In 2027, the required transatlantic network capacity was modeled to have an expected value of 210 Gbps, with a 95% confidence interval reaching 330 Gbps, just under the current bandwidth of 340 Gbps. Changing specific computing policies was shown to decrease both the system cost and the network load: specific policies can reduce the network load to an expected value of 150 Gbps, with a 95% confidence interval reaching 260 Gbps. Given the unprecedented volume of data, such policy changes can allow CMS to meet its future physics goals.
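A schematic sketch of the kind of uncertainty propagation described (every distribution and constant below is assumed; the toy inputs are only chosen so the output lands near the abstract's reported scale, and none of this is the thesis's actual model):

```python
import numpy as np

# Propagate uncertain system parameters through a simple bandwidth model by
# Monte Carlo and report the mean and 95% interval of peak transatlantic load.

rng = np.random.default_rng(3)
n = 100_000

data_volume_pb = rng.normal(2000.0, 150.0, n)     # PB/year produced (assumed)
transatlantic_frac = rng.uniform(0.10, 0.20, n)   # share crossing the Atlantic
replication = rng.normal(1.3, 0.15, n)            # average copies shipped
peak_to_mean = rng.normal(2.0, 0.25, n)           # burstiness factor

seconds_per_year = 3.156e7
gbit_per_pb = 8e6                                 # 1 PB = 8e6 gigabits

mean_gbps = data_volume_pb * transatlantic_frac * replication \
            * gbit_per_pb / seconds_per_year
peak_gbps = mean_gbps * peak_to_mean

print(f"expected peak load: {peak_gbps.mean():.0f} Gbps")
print(f"95% interval: ({np.percentile(peak_gbps, 2.5):.0f}, "
      f"{np.percentile(peak_gbps, 97.5):.0f}) Gbps")
```

A policy change enters such a model as a shift in one input (e.g., a lower replication factor), and rerunning the propagation immediately yields the new load distribution.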