71.
Using Phase-Field Modeling With Adaptive Mesh Refinement To Study Elasto-Plastic Effects In Phase Transformations. Greenwood, Michael, 11 1900 (has links)
<p> This thesis details work done in the development of the phase field model which
allows simulation of elasticity with diffuse interfaces and the extension of a thin
interface analysis developed by previous authors to study non-dilute ideal alloys.
These models are coupled with a new finite difference adaptive mesh algorithm to
efficiently simulate a variety of physical systems. The finite difference adaptive
mesh algorithm is shown to be at worst 4-5 times faster than an equivalent finite-element
method on a per-node basis. In addition to this increase in speed for explicit
solvers in the code, an iterative solver used to compute elastic fields is found to
converge in O(N) time for a dynamically growing precipitate, where N is the number
of nodes on the adaptive mesh. A previous phase field formulation is extended
to make possible the study of non-ideal binary alloys with complex phase
diagrams. A phase field model is also derived for a free energy that incorporates an
elastic free energy and is used to investigate the competitive development of solid
state structures in which the kinetic transfer rate of atoms from the parent phase
to the precipitate phase is large. This results in the growth of solid state dendrites.
The morphological effects of competing surface-energy anisotropy and anisotropy in the
elastic modulus tensor are analyzed. It is shown that the transition from
surface-energy-driven dendrites to elastically driven dendrites depends on the magnitudes
of the surface-energy anisotropy coefficient (E4) and the anisotropy of the elastic
tensor (β), as well as on the supersaturation of the particle and therefore on a specific
Mullins-Sekerka onset radius. The transition point of this competitive process
is predicted from these three controlling parameters. </p> / Thesis / Doctor of Philosophy (PhD)
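The adaptive-refinement idea above can be sketched with a deliberately minimal 1D stand-in (not the thesis's finite-difference adaptive mesh code; the `adapt` function and the tanh test profile are illustrative assumptions): bisect any interval where linear interpolation misses the midpoint value by more than a tolerance, so nodes concentrate where the solution varies rapidly.

```python
import numpy as np

def adapt(xs, f, tol, max_level=12):
    """Flag-and-bisect refinement sketch: split any interval whose midpoint
    value deviates from linear interpolation by more than tol."""
    xs = sorted(xs)
    for _ in range(max_level):
        new, refined = [xs[0]], False
        for a, b in zip(xs, xs[1:]):
            m = 0.5 * (a + b)
            if abs(f(m) - 0.5 * (f(a) + f(b))) > tol:
                new.append(m)          # flagged: insert the midpoint
                refined = True
            new.append(b)
        xs = new
        if not refined:
            break
    return xs

# Steep tanh front at x = 0 attracts most of the nodes (illustrative profile)
front = lambda x: np.tanh(50.0 * x)
grid = adapt([-1.0, -0.5, 0.0, 0.5, 1.0], front, tol=1e-2)
print(len(grid))
```

Running this concentrates nearly all nodes around the steep front at x = 0, the same economy that makes adaptive meshing attractive for tracking moving interfaces.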
72.
On the development of inhibitory projection neurons. Simon, Shane Joseph, January 2023 (has links)
High precision is critical for normal neural circuit function, but that precision is not
innate. The location, strength, and number of inputs in a neural circuit are
modified in early postnatal development in a process called refinement. The
refinement of long-range excitatory projections is well-known, but less is known
about the refinement of long-range inhibitory projections. What we do know about
inhibitory projection refinement comes from the glycinergic medial nucleus of the
trapezoid body to lateral superior olive (MNTB-LSO) projection of the auditory
brainstem. During early postnatal life, the MNTB-LSO projection undergoes
morphological and physiological refinement. Notably, the MNTB-LSO projection
transiently expresses vesicular glutamate transporter 3 (VGLUT3) and
synaptotagmin 1 (Syt1), transiently releases glutamate, and undergoes
glutamate-dependent refinement. However, it remains uncertain whether
glutamate release is specific to the auditory brainstem or could be a more
general phenomenon of inhibitory projections.
To shed light on this question, I investigated another inhibitory projection of the
hindbrain, the GABAergic Purkinje projection of the cerebellum. The Purkinje
projection shares key characteristics with the MNTB-LSO projection, including its
inhibitory nature, location in the hindbrain, obvious topographic organization,
heterogeneity of the target cells, and expression of VGLUT3 transcript and
protein. In this thesis, I sought to determine: 1) whether the expression profile of
VGLUT3 and Syt1 in the Purkinje projection matches that of the MNTB-LSO
projection, and whether the Purkinje projection also releases glutamate, 2)
whether the expression profile of synaptic vesicle protein 2 (SV2) isoforms, SV2B
and SV2C, matches the expression profile of other synaptic vesicle proteins in
the Purkinje and MNTB-LSO projection, and 3) whether the Purkinje projection
undergoes postnatal morphological refinement like the MNTB-LSO projection. I
found that like the MNTB-LSO projection, the Purkinje projection transiently
expresses VGLUT3 and Syt1, releases glutamate in early postnatal life, and may
undergo morphological refinement. / Dissertation / Doctor of Philosophy (PhD) / Everything you do, whether it be playing your favorite sport or begrudgingly
reading this thesis, requires neural circuits, which are the basic functional unit of
the nervous system. How neurons are wired together is crucial for their role in
executing a task. But how these neurons fine-tune their connections – in a
process called refinement, by getting the right connections to the right location, of
the right strength, and of the right number – is an open-ended question in
neuroscience. Refinement is better studied in excitatory projection neurons,
but we know very little about how refinement occurs in inhibitory projection
neurons. I compare some of the unusual characteristics of what we do know
about inhibitory refinement in the auditory brainstem to another famous projection
of the hindbrain, the Purkinje projection. Understanding more about the
refinement of inhibitory projections gives key insights into how neural circuits
function and how they facilitate complex behaviours.
73.
Code Verification Using the Method of Manufactured Solutions. Murali, Vasanth Kumar, 13 December 2002 (has links)
Implementations of numerical simulations for solving systems of partial differential equations are often not verified and are falsely assumed to work correctly. As a result, the implementations are prone to coding errors that could degrade the accuracy of the solution. To ensure that a code is written correctly, rigorous verification of all parts of the code is necessary. Code verification is the task of ascertaining whether a numerical algorithm is solving the governing equations of the problem correctly. If an exact solution existed for the governing equations, verification would be easier, but such solutions are rare because of the non-linearity of common Computational Fluid Dynamics (CFD) problems. In the absence of exact solutions, grid refinement studies, which use simulations on a sequence of grids, are the most commonly used methods to verify codes, but even these studies have limitations. The Method of Manufactured Solutions (MMS) is a novel, recently developed technique that verifies the observed order-of-accuracy of the implementation of a numerical algorithm. The method is more general and overcomes many of the limitations of the method of exact solutions and of grid refinement studies. The central idea is to modify the governing equations and the boundary conditions by adding forcing functions or source terms in order to drive the discrete solution to a prescribed or "manufactured" solution chosen a priori. A grid convergence study is performed subsequently to determine the observed orders. Two methods of accuracy assessment are presented here: solution accuracy analysis and residual error analysis. The method based on the error in the spatial residual is computationally less expensive and proved to be a valuable debugging tool.
In the present work, the Method of Manufactured Solutions (MMS) is implemented on a compressible flow solver that solves the two-dimensional Euler equations on structured grids and an incompressible code that solves the two-dimensional Navier-Stokes equations on unstructured meshes. Exponential functions are used to "manufacture" steady solutions to the governing equations. Solution and residual error analyses are presented. The influence of grid non-uniformity on the numerical accuracy is studied.
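The MMS workflow can be illustrated on a toy problem (an assumption chosen for brevity, not the thesis's Euler or Navier-Stokes solvers): manufacture u(x) = sin(pi x), derive the source term f = pi^2 sin(pi x) analytically, solve -u'' = f with a second-order central-difference scheme on two grids, and check that the observed order of accuracy approaches the formal order of 2.

```python
import numpy as np

def solve_poisson(n):
    """Solve -u'' = f on (0,1), u(0)=u(1)=0, with 2nd-order central
    differences, where f comes from the manufactured solution sin(pi x).
    Returns the max-norm error against the manufactured solution."""
    h = 1.0 / n
    xi = np.linspace(0.0, 1.0, n + 1)[1:-1]      # interior nodes
    f = np.pi**2 * np.sin(np.pi * xi)            # analytically derived source
    # Tridiagonal operator for -u'' with central differences
    A = (np.diag(2.0 * np.ones(n - 1))
         - np.diag(np.ones(n - 2), 1)
         - np.diag(np.ones(n - 2), -1)) / h**2
    u = np.linalg.solve(A, f)
    return np.max(np.abs(u - np.sin(np.pi * xi)))

e_coarse = solve_poisson(32)
e_fine = solve_poisson(64)
observed_order = np.log2(e_coarse / e_fine)
print(f"observed order: {observed_order:.3f}")   # close to the formal order 2
```

An observed order that plateaus below the formal order under refinement is exactly the symptom of a coding error that MMS is designed to expose.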
74.
Surface Mesh Generation using Curvature-Based Refinement. Sinha, Bhaskar, 13 December 2002 (has links)
Surface mesh generation is a critical component of the mesh generation process. The objective of the described effort was to determine whether a combination of constrained Delaunay triangulation (for triangles), the advancing front method (for quadrilaterals), curvature-based refinement, smoothing, and reconnection is a viable approach for discretizing a NURBS patch while holding the boundary nodes fixed. The approach is significant when coupled with a recently developed geometry specification that explicitly identifies common edges. This thesis describes the various techniques used to achieve these objectives. Application of this approach to several representative geometries demonstrates that it is an effective alternative to traditional approaches.
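A hedged sketch of the curvature-based refinement idea, using chordal deviation as a curvature proxy (the recursive scheme below is an illustration, not the thesis's NURBS discretizer): subdivide a parameter interval whenever the chord midpoint strays from the curve midpoint by more than a tolerance.

```python
import numpy as np

def refine(curve, t0, t1, tol, depth=0, max_depth=20):
    """Recursively split [t0, t1] until the chord midpoint lies within
    tol of the curve midpoint (a simple chordal-deviation criterion)."""
    p0, p1 = curve(t0), curve(t1)
    tm = 0.5 * (t0 + t1)
    pm = curve(tm)
    if depth >= max_depth or np.linalg.norm(pm - 0.5 * (p0 + p1)) < tol:
        return [t0]                      # leaf: keep only the left endpoint
    return (refine(curve, t0, tm, tol, depth + 1, max_depth)
            + refine(curve, tm, t1, tol, depth + 1, max_depth))

# Quarter circle: constant curvature, so the refinement comes out uniform
circle = lambda t: np.array([np.cos(t), np.sin(t)])
params = refine(circle, 0.0, np.pi / 2, tol=1e-3) + [np.pi / 2]
print(len(params))   # 33 parameter values (32 uniform segments)
```

On a curve with varying curvature the same criterion clusters nodes in high-curvature regions, which is the behavior curvature-based refinement exploits on surface patches.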
75.
Optical and X-Ray Diffraction Analyses of Shock Metamorphosed Knox Group Dolostone from Wells Creek Crater, Tennessee. Seeley, Jack R., 01 October 2018 (has links)
No description available.
76.
Myth, Mysticism and Morality in Russell Hoban's Later Fiction. Smith, Joan P., 10 1900 (has links)
This thesis considers the movement away from anthropocentrism towards mythocentrism in Russell Hoban's later fiction. An analysis of the nature and results of the juxtapositions of myth, science, collective history and personal crisis in the following novels exemplifies this essentially revisionist philosophy: Riddley Walker (1980), Pilgermann (1983) and The Medusa Frequency (1987). In turn, these novels bring Celtic/Christian, Judeo-Islamic and Greco-Roman myth to bear upon various rational scientific societies and characters. In all cases transcendent moments edify the principal characters, whilst alienating them from their societies; in some instances social harmony is restored.
This multicultural comparison reveals in Hoban's method a growing concern for (collective and individual) moral and spiritual refinement. As the characters become less anthropocentric and more myth-centred, their transformations towards sexual maturity parallel similar changes in their attitude to myth. They move from destructive behaviour to creative. The observed spiritual growth, from fear and resignation, through faith and liberation, to baptised imagination, provides the structure for the analysis and interpretation of the three novels. / Thesis / Master of Arts (MA)
77.
On Multi-Scale Refinement of Discrete Data. Dehghani Tafti, Pouya, 10 1900 (has links)
<p> It is possible to interpret multi-resolution analysis from both Fourier-domain and temporal/spatial-domain standpoints. While a Fourier-domain interpretation helps in designing a powerful machinery for multi-resolution refinement on regular point-sets and lattices, most of its techniques cannot be directly generalized to the case of irregular sampling. Therefore, in this thesis we provide a new definition and formulation of multi-resolution refinement, based on a temporal/spatial-domain understanding, that is general enough to allow multi-resolution approximation of different spaces of functions by processing samples (or observations) that can be irregularly distributed or even obtained using different sampling methods. We then provide a construction for designing and implementing classes of refinement schemes in these general settings. The framework for multi-resolution refinement that we discuss includes and extends the existing mathematical machinery for multi-resolution analysis, and the suggested construction unifies many of the schemes currently in use and, more importantly, allows designing schemes for many new settings. </p> / Thesis / Master of Applied Science (MASc)
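As one concrete instance of the refinement schemes the abstract refers to, here is Chaikin's classical corner-cutting subdivision for regular data (an illustrative textbook scheme, not one constructed in the thesis): each level replaces every edge with two points placed at 1/4 and 3/4 along it, and the refined polygons converge to a smooth quadratic B-spline limit curve.

```python
def chaikin(points, levels=2):
    """Chaikin corner cutting: one refinement level maps n points to 2(n-1)."""
    pts = [tuple(map(float, p)) for p in points]
    for _ in range(levels):
        new = []
        for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
            new.append((0.75 * x0 + 0.25 * x1, 0.75 * y0 + 0.25 * y1))
            new.append((0.25 * x0 + 0.75 * x1, 0.25 * y0 + 0.75 * y1))
        pts = new
    return pts

coarse = [(0, 0), (1, 2), (2, 0), (3, 2)]
fine = chaikin(coarse, levels=2)
print(len(fine))   # 4 -> 6 -> 10 points after two levels
```

Schemes of this kind rely on the regular spacing of the samples; the thesis's contribution is precisely a formulation that also covers irregularly distributed observations.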
78.
Designing, building and testing a UV photouncaging system to study the development of the auditory brainstem. Kathir, Arjun, 11 1900 (has links)
In mammals, sound localization along the azimuth is computed in part in the lateral superior olive (LSO), a binaural nucleus in the brainstem. Information about the location of the sound source is derived from differences in sound intensity at the two ears, the Interaural Level Difference (ILD). Within each LSO, principal cells compute ILDs by integrating an excitatory input carrying intensity information from the ipsilateral ear with an inhibitory input carrying intensity information from the contralateral ear. This computation requires that the phenotypically distinct inputs onto individual LSO cells be matched for sound frequency. The process of ‘aligning’ and refining the inputs for frequency information occurs during the first few postnatal weeks in rats, through modifications of synapse strength and cell morphology. Our lab studies the distribution, and re-distribution, of these converging inputs during the early period of circuit refinement.
A common strategy for examining the spatial distribution of synapses is through anatomical techniques, including for example immunohistological methods for localizing specific synaptic proteins. Ultimately, however, we need to understand how synapse position affects the functional response. Asking this kind of question requires the ability to stimulate individual synapses while recording from the dendrite or cell body, an approach for which we use laser scanning photostimulation (LSPS). I designed two LSPS systems in order to stimulate the post-synaptic sites of excitatory or inhibitory inputs on LSO principal neurons while recording at the cell body using whole-cell patch clamp. I researched many optical designs and technologies when fine-tuning my design. My designs and initial groundwork will help a future lab member finish one or both of the LSPS designs. / Thesis / Master of Science (MSc)
79.
Exploring Abstraction Techniques for Scalable Bit-Precise Verification of Embedded Software. He, Nannan, 01 June 2009 (has links)
Conventional testing has become inadequate for satisfying the rigorous reliability requirements of embedded software, which plays an increasingly important role in many safety-critical applications. Automatic formal verification is a viable avenue for ensuring the reliability of such software. Recently, more and more formal verification techniques have begun modeling a non-Boolean data variable as a bit-vector with bounded width (i.e., a vector of multiple bits, such as 32 or 64 bits) to implement bit-precise verification. One major challenge in the scalable application of such bit-precise verification to real-world embedded software is that the state space for verification can be intractably large.
In this dissertation, several abstraction techniques are explored to deal with this scalability challenge in the bit-precise verification of embedded software. First, we propose a tight integration of program slicing, which is an important static program analysis technique, with bounded model checking (BMC). While many software verification tools apply program slicing as a separate preprocessing step, we integrate slicing operations into our model construction and reduction process and enhance them with compilation optimization techniques to compute accurate program slices. We also apply a proof-based abstraction-refinement framework to further remove those program segments irrelevant to the property being verified.
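Bounded model checking, mentioned above, can be caricatured with a tiny explicit-state search (a real BMC engine encodes the k-step unrolling as a SAT/SMT formula instead of enumerating inputs; every name below is an illustrative assumption): unroll the transition relation up to a bound and look for an input sequence that violates the property.

```python
from itertools import product

def bmc(init, trans, prop, bound, inputs):
    """Explicit-state BMC toy: search all input sequences of length <= bound
    for one that drives the system into a property violation."""
    for k in range(bound + 1):
        for seq in product(inputs, repeat=k):
            s = init
            for i in seq:
                s = trans(s, i)
            if not prop(s):
                return list(seq)        # counterexample trace
    return None                         # property holds up to the bound

# 3-bit saturating counter; the safety property "value stays below 7" fails
cex = bmc(init=0,
          trans=lambda s, i: min(s + i, 7),
          prop=lambda s: s < 7,
          bound=8,
          inputs=[0, 1])
print(cex)   # shortest violation: seven consecutive increments
```

Program slicing helps precisely because it shrinks the transition relation that must be unrolled at every step of such a search.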
Next, we present a method of using symbolic simulation for scalable formal verification. The simulation abstracts concrete variable values by treating X as a symbolic value. The method also embeds this symbolic simulation in a counterexample-guided abstraction-refinement framework to automatically construct and verify an abstract model, which has a smaller state space than that of the original concrete program.
This dissertation also presents our efforts on using two common testability metrics — controllability metric (CM) and observability metric (OM) — as the high-level structural guidance for scalable bit-precise verification. A new abstraction approach is proposed based on the concept of under- and over-approximation to efficiently solve bit-vector formulas generated from embedded software verification instances. These instances include both complicated arithmetic computations and intensive control structures. Our approach applies CM and OM to assist the abstraction refinement procedure in two ways: (1) it uses CM and OM to guide the construction of a simple under-approximate model, which includes only a subset of execution paths in a verification instance, so that a counterexample that refutes the instance can be obtained with reduced effort, and (2) in order to reduce the cost of using proof-based refinement alone, it uses OM heuristics to guide the restoration of additional verification-relevant formula constraints with low computational cost for refinement. Experiments show a significant reduction of the solving time compared to state-of-the-art solvers for the bit-vector arithmetic.
This dissertation finally proposes an efficient algorithm to discover non-uniform encoding widths of individual variables in the verification model, which may be smaller than their original modeling widths but sufficient for the verification. Our algorithm distinguishes itself from existing approaches in that it is path-oriented: it takes advantage of CM and OM values to guide the computation of the initial, non-uniform encoding widths, and the effective adjustment of these widths along different paths, until the property is verified. It can exclude from the search those paths that are deemed less favorable or that have been searched in previous steps, thus simplifying the problem. Experiments demonstrate that our algorithm can significantly speed up the verification, especially in searching for a counterexample that violates the property under verification. / Ph. D.
80.
Adjoint-based space-time adaptive solution algorithms for sensitivity analysis and inverse problems. Alexe, Mihai, 14 April 2011 (has links)
Adaptivity in both space and time has become the norm for solving problems modeled by partial differential equations. The size of the discretized problem makes uniformly refined grids computationally prohibitive. Adaptive refinement of meshes and time steps makes it possible to capture the phenomena of interest while keeping the cost of a simulation tractable on current hardware. Many fields in science and engineering require the solution of inverse problems, where parameters for a given model are estimated based on available measurement information. In contrast to forward (regular) simulations, inverse problems have not extensively benefited from adaptive solver technology. Previous research in inverse problems has focused mainly on the continuous approach to calculating sensitivities, and has typically employed fixed time and space meshes in the solution process. Inverse problem solvers that make exclusive use of uniform or static meshes avoid complications such as the differentiation of mesh motion equations, or inconsistencies in the sensitivity equations between subdomains with different refinement levels. However, this comes at the cost of low computational efficiency. More efficient computations are possible through judicious use of adaptive mesh refinement, adaptive time steps, and the discrete adjoint method.
This dissertation develops a complete framework for fully discrete adjoint sensitivity analysis and inverse problem solutions, in the context of time dependent, adaptive mesh, and adaptive step models. The discrete framework addresses all the necessary ingredients of a state-of-the-art adaptive inverse solution algorithm: adaptive mesh and time step refinement, solution grid transfer operators, a priori and a posteriori error analysis and estimation, and discrete adjoints for sensitivity analysis of flux-limited numerical algorithms. / Ph. D.
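The discrete adjoint method named above can be sketched on a scalar model problem (an illustrative stand-in for the dissertation's PDE setting; all names are assumptions): differentiate the time-stepping scheme itself, propagate the adjoint in a backward sweep over the stored trajectory, and verify the gradient against a finite difference.

```python
def cost_and_gradient(a, u0=1.0, dt=0.01, steps=100):
    """Discrete adjoint for the scheme u_{n+1} = (1 - a*dt) * u_n with
    cost J = 0.5 * u_N**2: the forward sweep stores the trajectory, the
    backward sweep propagates the adjoint and accumulates dJ/da."""
    u = [u0]
    for _ in range(steps):
        u.append((1.0 - a * dt) * u[-1])
    J = 0.5 * u[-1] ** 2
    lam = u[-1]                       # terminal adjoint: dJ/du_N
    dJda = 0.0
    for n in reversed(range(steps)):
        dJda += -dt * u[n] * lam      # direct dependence of step n+1 on a
        lam *= 1.0 - a * dt           # adjoint of the step operator
    return J, dJda

a, eps = 0.8, 1e-6
_, g_adj = cost_and_gradient(a)
g_fd = (cost_and_gradient(a + eps)[0] - cost_and_gradient(a - eps)[0]) / (2 * eps)
print(g_adj, g_fd)   # the two gradients agree to finite-difference accuracy
```

The same pattern scales to PDEs, where the stored trajectory is what makes adaptivity delicate: the backward sweep must see the same grids and time steps the forward sweep used.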