1 |
Multi-scale tilt depth estimation. Van Buren, Reece. 04 March 2014.
Many approaches to the estimation of magnetic source depths from
magnetic data have been investigated over the past half century. These
approaches have particular strengths and weaknesses, and few have been
implemented on a wide, commercial scale. A review of many of the more
popular methods, as well as a few of the more obscure ones, is presented
in this work.
The history of multi-scale computation, with emphasis on its application to
potential fields, is summarized. A newly developed depth estimation
technique dubbed Multi-Scale Tilt Depth Estimation is presented. The method
has been shown to derive suitable depth estimates when applied to
modelled data computed from a range of simple synthetic models.
The method's sensitivity to model type, dip, interference and noise has
been tested. A number of mitigating strategies to improve and stabilize the
method's performance have been proposed. Results of the successful
application of the method to field datasets from the Bushveld Complex and
surrounding areas in South Africa are shown. Code to execute the
method, written in Matlab, is provided in Appendix A. Figures of the
application of the method to all synthetic models are included in
Appendices B and C.
A portion of this work has been presented at the South African
Geophysical Association's 11th Biennial Technical Meeting and Exhibition
in the form of oral and poster presentations, accompanied by a short
paper, which is included here in Appendix D.
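For orientation, the quantity underlying tilt-depth methods is the tilt angle of the gridded magnetic anomaly. The sketch below is a minimal numpy illustration under standard assumptions (vertical derivative taken in the wavenumber domain), not the Matlab code of Appendix A.

```python
# Minimal sketch of the standard tilt angle
#   theta = arctan( dT/dz / sqrt((dT/dx)^2 + (dT/dy)^2) )
# for a gridded magnetic anomaly T. Illustrative only.
import numpy as np

def tilt_angle(T, dx, dy):
    """Return the tilt angle (radians) of a 2-D gridded magnetic anomaly."""
    # Horizontal derivatives by finite differences (axis 0 = y, axis 1 = x).
    dT_dy, dT_dx = np.gradient(T, dy, dx)

    # Vertical derivative via the wavenumber-domain relation FT(dT/dz) = |k| FT(T).
    ny, nx = T.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dy)
    KX, KY = np.meshgrid(kx, ky)
    k = np.sqrt(KX**2 + KY**2)
    dT_dz = np.real(np.fft.ifft2(np.fft.fft2(T) * k))

    horiz = np.hypot(dT_dx, dT_dy)
    # In the classical tilt-depth rule, the depth to a vertical contact is the
    # horizontal distance between the 0 and +/-45 degree contours of this angle.
    return np.arctan2(dT_dz, horiz)
```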
|
2 |
Integrated approaches to the optimal design of multiscale systems. Lovelady, Eva Marie. 15 May 2009.
This work is aimed at the development of systematic approaches to the design of
multiscale systems. Specifically, four problems are addressed: environmental impact
assessment (EIA) of new and retrofitted industrial processes, integration of process
effluents with macroscopic environmental systems, eco-industrial parks (EIPs), and
advanced life support (ALS) systems for planetary habitation. While the design metrics
and specific nature of each problem pose different challenges, there are common
themes in the devised solution strategies:
a. An integrated approach provides insights that are missed when the individual
components of the system are addressed in isolation and, therefore, better
understanding and superior results.
b. Instead of dealing with multiple scales simultaneously, the design problem is
addressed through interconnected stages without infringing upon the
optimization degrees of freedom in each stage. This is possible through the
concept of targeting.
c. Mathematical programming techniques can be used effectively to systematize
the integration concepts, the target identification, and the design of multi-scale
systems. The dissertation also introduces the following specific contributions:
i. For EIA, a new procedure is developed to overcome the limitations of
conventional approaches. The introduced procedure is based on three
concepts: process synthesis for systematic generation of alternatives and
targeting for benchmarking environmental impact ahead of detailed design,
integration of alternatives with the rest of the process, and reverse problem
formulation for targeting.
ii. For integrating process effluents with macroscopic environmental systems,
focus is given to the impact of wastewater discharges on macroscopic
watersheds and drainage systems. A reverse problem formulation is
introduced to determine maximum allowable process discharges that will
meet overall environmental requirements of the watershed.
iii. For EIPs, a new design procedure is developed to allow multiple processes
to share a common environmental infrastructure, exchange materials, and
jointly utilize interception systems that treat waste materials and byproducts.
A source-interception-sink representation is developed and modeled through
an optimization formulation. Optimal interactions among the various
processes and the shared infrastructure to be installed are identified (an
illustrative sketch of such an allocation formulation follows this list).
iv. A computational metric is introduced to compare various alternatives in ALS
and planetary habitation systems. A selection criterion identifies the
alternative which contributes to the maximum reduction of the total
equivalent system mass (ESM) of the system.
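As a toy illustration of the source-interception-sink idea in item iii, the sketch below allocates waste streams from several plants to shared treatment units by linear programming. All data, names and the specific formulation are assumptions for illustration, not the dissertation's model.

```python
# Toy source-sink allocation: route each plant's waste flow to shared
# interception/treatment units at minimum cost, subject to unit capacities.
import numpy as np
from scipy.optimize import linprog

n_sources, n_sinks = 3, 2
supply = np.array([10.0, 15.0, 5.0])      # waste flow from each plant (assumed)
capacity = np.array([20.0, 12.0])         # capacity of each shared unit (assumed)
cost = np.array([[1.0, 2.5],              # routing/treatment cost per unit flow (assumed)
                 [2.0, 1.0],
                 [1.5, 1.5]])

c = cost.ravel()                          # decision variables x[i, j], flattened row-major
# Each plant must send all of its waste somewhere.
A_eq = np.zeros((n_sources, n_sources * n_sinks))
for i in range(n_sources):
    A_eq[i, i * n_sinks:(i + 1) * n_sinks] = 1.0
# Each shared unit cannot exceed its capacity.
A_ub = np.zeros((n_sinks, n_sources * n_sinks))
for j in range(n_sinks):
    A_ub[j, j::n_sinks] = 1.0

res = linprog(c, A_ub=A_ub, b_ub=capacity, A_eq=A_eq, b_eq=supply, bounds=(0, None))
print(res.x.reshape(n_sources, n_sinks))  # optimal flows from each plant to each unit
```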
|
3 |
Multiscale numerical methods for some types of parabolic equations. Nam, Dukjin. 15 May 2009.
In this dissertation we study multiscale numerical methods for nonlinear parabolic
equations, turbulent diffusion problems, and high-contrast parabolic equations. We
focus on the design and analysis of multiscale methods which can capture the effects
of the small scales locally.
First, we study the numerical homogenization of nonlinear parabolic equations
in periodic cases. We examine the convergence of the numerical homogenization
procedure formulated within the framework of the multiscale finite element method.
The goal of the second problem is to develop efficient multiscale numerical techniques
for solving turbulent diffusion equations governed by cellular flows. The solution near
the separatrices can be approximated by the solution of a system of one-dimensional
heat equations on a graph. We study the numerical implementation of this asymptotic
approach; spectral methods and a finite difference scheme on exponential grids are
used to solve the coupled heat equations. The third problem we study is linear parabolic
equations in strongly channelized media. We concentrate on showing that the solution
depends smoothly on the steady-state solution.
As for the first problem, we obtain quantitative estimates for the convergence of
the correctors and some parts of the truncation error. These explicit estimates show us
the sources of the resonance errors. We perform numerical implementations of the
asymptotic approach for the second problem. We find that the finite difference scheme
with exponential grids is easy to implement and gives more accurate solutions,
while spectral methods have difficulty finding the constant states without major
reformulation. Under some assumptions, we rigorously justify the formal asymptotic
expansion using a special coordinate system and asymptotic analysis with respect to
the high contrast for the third problem.
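For context, the classical linear periodic setting behind numerical homogenization of the first problem can be summarized as follows (standard background with assumed notation; the nonlinear case studied here generalizes it):

```latex
% Linear periodic homogenization of a parabolic equation (standard background):
\partial_t u_\epsilon - \nabla\!\cdot\!\bigl(a(x/\epsilon)\,\nabla u_\epsilon\bigr) = f,
\qquad
u_\epsilon \approx u_0 + \epsilon\,\chi_j\!\left(\tfrac{x}{\epsilon}\right)\frac{\partial u_0}{\partial x_j},
% where u_0 solves the homogenized equation with constant effective coefficients
\partial_t u_0 - \nabla\!\cdot\!\bigl(a^{*}\,\nabla u_0\bigr) = f,
\qquad
a^{*}_{ij} = \int_Y a_{ik}(y)\Bigl(\delta_{kj} + \frac{\partial \chi_j}{\partial y_k}(y)\Bigr)\,dy,
% and the correctors \chi_j solve the periodic cell problem
-\nabla_y\!\cdot\!\bigl(a(y)\,(e_j + \nabla_y \chi_j)\bigr) = 0 \quad \text{in } Y .
```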
|
4 |
Numerical methods for multiscale inverse problems. Frederick, Christina A. 25 June 2014.
This dissertation focuses on inverse problems for partial differential equations with multiscale coefficients in which the goal is to determine the coefficients in the equation using solution data. Such problems pose a huge computational challenge, in particular when the coefficients are of multiscale form. When faced with balancing computational cost with accuracy, most approaches only deal with models of large scale behavior and, for example, account for microscopic processes by using effective or empirical equations of state on the continuum scale to simplify computations. Obtaining these models often results in the loss of the desired fine scale details. In this thesis we introduce ways to overcome this issue using a multiscale approach.

The first part of the thesis establishes the close relation between computational grids in multiscale modeling and sampling strategies developed in information theory. The theory developed is based on the mathematical analysis of multiscale functions of the type that are studied in averaging and homogenization theory and in multiscale modeling. Typical examples are two-scale functions f(x, x/ε) (0 < ε ≪ 1) that are periodic in the second variable. We prove that under certain band limiting conditions these multiscale functions can be uniquely and stably recovered from nonuniform samples of optimal rate.

In the second part, we present a new multiscale approach for inverse homogenization problems. We prove that in certain cases where the specific form of the multiscale coefficients is known a priori, imposing an additional constraint of a microscale parametrization results in a well-posed inverse problem. The mathematical analysis is based on homogenization theory for partial differential equations and classical theory of inverse problems. The numerical analysis involves the design of multiscale methods, such as the heterogeneous multiscale method (HMM). The use of HMM solvers for the forward model has unveiled theoretical and numerical results for microscale parameter recovery, including applications to inverse problems arising in exploration seismology and medical imaging.
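For orientation, the structure that makes such recovery possible can be written out in one dimension (assumed notation; background rather than a result transcribed from the thesis):

```latex
% A two-scale function with periodic fast variable, expanded in the Fourier
% series of that variable (one-dimensional illustration; notation assumed):
f\!\left(x, \tfrac{x}{\epsilon}\right)
  = \sum_{k \in \mathbb{Z}} \hat{f}_k(x)\, e^{2\pi i k x/\epsilon},
  \qquad 0 < \epsilon \ll 1 .
% If each coefficient \hat{f}_k is band-limited and only finitely many are
% nonzero, the spectrum of f is confined to narrow bands centred at multiples
% of 1/\epsilon; this is the kind of band-limiting condition under which stable
% recovery from nonuniform samples at a rate tied to the total band measure
% becomes possible.
```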
|
5 |
Developing dual-scale models for structured liquids and polymeric materials. Gowers, Richard. January 2016.
Computer simulation techniques for exploring the microscopic world are quickly gaining popularity as a tool to complement theoretical and experimental approaches. Molecular dynamics (MD) simulations allow the motion of an N-body soft matter system to be solved using a classical mechanics description. The scope of these simulations is, however, limited by the available computational power, requiring the development of multiscale methods to make better use of available resources. Dual scale models are a novel form of molecular model which simultaneously feature particles at two levels of resolution. This allows a combination of atomistic and coarse-grained (CG) force fields to be used to describe the interactions between particles. By using this approach, targeted details in a molecule can be described at high resolution while other areas are treated with fewer degrees of freedom. This approach aims to allow for simulating the key features of a system at a reduced computational cost. In this thesis, two generations of a methodology for constructing dual scale models are presented and applied to various materials including polyamide, polyethene, polystyrene and octanol. Alongside a variety of well-known atomistic force fields, these models all use iterative Boltzmann inversion (IBI) force fields to describe the CG interactions. In addition, the algorithms and data structures for implementing dual scale MD are detailed, and expanded to include a multiple time step (MTS) scheme for optimising its performance. Overall, the IBI and atomistic force fields were compatible with each other and able to correctly reproduce the expected structural results. The first generation methodology featured bonds directly between atoms and beads; however, these did not produce the correct structures. The second generation used only atomistic resolution bonds and this improved the intramolecular structures greatly for a relatively minor cost. In both the polyamide and octanol systems studied, the models were also able to properly describe the hydrogen bonding. For the CG half of the force field, it was possible to either use preexisting force field parameters or develop new parameters in situ. The resulting dynamical behaviour of the models was unpredictable and remains an open question both for CG and dual scale models. The theoretical performance of these models is faster than the atomistic counterpart because of the reduced number of pairwise interactions that must be calculated, and this scaling was seen with the proposed reference implementation. The MTS scheme was successful in improving the performance with no effects on the quality of results. In summary, this work has shown that dual scale models are able to correctly reproduce the structural behaviour of atomistic models at a reduced computational cost. With further steps towards making these models more accessible, they will become an exciting new option for many types of simulation.
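For context, a single iterative Boltzmann inversion update has the standard form shown below. This is a minimal illustrative sketch (function names and the numerical guard are assumptions), not the thesis's implementation.

```python
# One iterative Boltzmann inversion (IBI) update for a tabulated CG pair potential:
#   V_{n+1}(r) = V_n(r) + kB*T * ln( g_n(r) / g_ref(r) )
# where g_n is the radial distribution function measured with the current
# potential and g_ref is the atomistic (target) RDF.
import numpy as np

def ibi_update(r, V_n, g_n, g_ref, kBT, eps=1e-8):
    """Return the updated tabulated pair potential V_{n+1}(r)."""
    # Guard against log(0) where either RDF vanishes (e.g. inside the excluded core).
    ratio = np.clip(g_n, eps, None) / np.clip(g_ref, eps, None)
    return V_n + kBT * np.log(ratio)

# Typical use: start from the potential of mean force V_0 = -kBT * ln(g_ref),
# then iterate: simulate -> measure g_n -> ibi_update, until g_n matches g_ref.
```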
|
6 |
Prediction of material fracture toughness as function of microstructure. Li, Yan. 12 January 2015.
Microstructure determines the fracture toughness of materials through the activation of different fracture mechanisms. To tailor the fracture toughness through microstructure design, it is important to establish relations between microstructure and fracture toughness. To this end, systematic characterization of microstructures, explicit tracking of the crack propagation process and realistic representation of deformation and fracture at different length scales are required. A cohesive finite element method (CFEM) based multiscale framework is proposed for analyzing the effect of microstructural heterogeneity, phase morphology, texture, constituent behavior and interfacial bonding strength on fracture toughness. The approach uses the J-integral to calculate the initiation/propagation fracture toughness, allowing explicit representation of realistic microstructures and fundamental fracture mechanisms.
Both brittle and ductile materials can be analyzed using this framework. For two-phase Al₂O₃/TiB₂ ceramics, the propagation fracture toughness is improved through a fine microstructure size scale, rounded reinforcement morphology and appropriately balanced interphase bonding strength and compliance. These microstructure characteristics can promote interface debonding and discourage particle-cracking-induced catastrophic failure. Based on the CFEM results, a semi-empirical model is developed to establish a quantitative relation between the propagation toughness and statistical measures of microstructure, fracture mechanisms, and constituent and interfacial properties. The analytical model provides deeper insights into the fracture process as it quantitatively predicts the proportion of each fracture mechanism in the heterogeneous microstructure. Based on the study of brittle materials, the semi-analytical model is extended to ductile materials such as AZ31 Mg alloy and Ti-6Al-4V alloy. The fracture resistance in these materials not only depends on the crack surfaces formed during the failure process but is also largely determined by the bulk plastic energy dissipation. The CFEM simulation permits the surface energy release rate to be quantified through explicit tracking of crack propagation in the microstructure. The plastic energy dissipation rate is evaluated as the difference between the predicted J value and the surface energy release rate. This method allows the competition between material deformation and fracture, as well as the competition between transgranular and intergranular fracture, to be quantified. The methodology developed in this thesis is potentially useful for both the selection of materials and the tailoring of microstructure to improve fracture resistance.
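For reference, the J-integral used in such toughness calculations is the classical Rice contour integral (standard form, notation assumed):

```latex
% Path-independent J-integral around the crack tip (crack along the x-axis):
J = \int_{\Gamma} \Bigl( W\,dy \;-\; T_i\,\frac{\partial u_i}{\partial x}\,ds \Bigr),
% where \Gamma is a contour enclosing the crack tip, W the strain energy
% density, T_i the traction components on \Gamma and u_i the displacements.
% In the ductile alloys, the bulk plastic dissipation rate is obtained as the
% predicted J minus the surface energy release rate tracked by the cohesive
% elements.
```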
|
7 |
New results in the multiscale analysis on perforated domains and applications. Onofrei, Daniel T. 23 April 2007.
Multiscale phenomena implicitly appear in every physical model. The understanding of the general behavior of a given model at different scales and how one can correlate the behavior at two different scales is essential and can offer new important information. This thesis describes a series of new techniques and results in the analysis of multi-scale phenomena arising in PDEs on variable geometries. In the second chapter of the thesis, we present a series of new error estimate results for periodic homogenization with nonsmooth coefficients. For the case of smooth coefficients, with the help of boundary layer correctors, error estimate results have been obtained by several authors (Oleinik, Lions, Vogelius, Allaire, Sarkis). Our results answer an open problem in the case of nonsmooth coefficients. Chapter 3 is focused on the homogenization of linear elliptic problems with variable nonsmooth coefficients and variable domains. Based on the periodic unfolding method proposed by Cioranescu, Damlamian and Griso in 2002, we propose a new technique for homogenization in perforated domains. With this new technique, classical results are rediscovered in a new light and a series of new results are obtained. Also, among other advantages, the method helps one prove better corrector results. Chapter 4 is dedicated to the study of the limit behavior of a class of Steklov-type spectral problems on the Neumann sieve. This is equivalent to the limit analysis for the DtN-map spectrum on the sieve and has applications in the stability analysis of the earthquake nucleation phase model studied in Chapter 5. In Chapter 5, a $\Gamma$-convergence result for a class of contact problems with a slip-weakening friction law is described. These problems are associated with the modeling of the nucleation phase in earthquakes. Through the $\Gamma$-limit we obtain a homogeneous friction law as a good approximation for the local friction law, and this helps us better understand the global behavior of the model, making use of the micro-scale information. To the best of our knowledge, this is the first result proposing a homogeneous friction law for this earthquake nucleation model.
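For orientation, the smooth-coefficient corrector estimate that the second chapter's results extend reads, in its classical form (assumed notation; background rather than a statement transcribed from the thesis):

```latex
% Classical first-order corrector estimate for smooth periodic coefficients:
\bigl\| u_\epsilon - u_0 - \epsilon\,\chi\!\left(\tfrac{x}{\epsilon}\right)\!\cdot\nabla u_0 \bigr\|_{H^1(\Omega)}
\;\le\; C\,\sqrt{\epsilon}\,\|u_0\|_{H^2(\Omega)},
% where u_\epsilon solves the oscillatory problem, u_0 the homogenized one and
% \chi denotes the periodic cell correctors; the sqrt(epsilon) loss is due to
% the boundary layers mentioned in the abstract.
```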
|
8 |
Multiscale Spectral Residue for Faster Image Object Detection. Silva Filho, Jose Grimaldo da. 18 January 2013.
Accuracy in image object detection has usually been achieved at the expense of much computational load. Therefore a trade-off between detection performance and fast execution commonly represents the ultimate goal of an object detector in real-life applications. Most images are composed of non-trivial amounts of background information, such as sky, ground and water. In this sense, using an object detector against a recurring background pattern can require a significant amount of the total processing time. To alleviate this problem, search space reduction methods can help focus the detection procedure on more distinctive image regions. Among the several approaches for search space reduction, we explored saliency information
to organize regions based on their probability of containing objects. Saliency
detectors are capable of pinpointing regions which generate stronger visual stimuli based
solely on information extracted from the image. The fact that saliency methods do not
require prior training is an important benefit, which allows application of these techniques
in a broad range of machine vision domains. We propose a novel method toward the goal
of faster object detectors. The proposed method was grounded on a multi-scale spectral
residue (MSR) analysis using saliency detection. For better search space reduction, our
method enables fine control of the search scale, more robustness to variations in saliency
intensity along an object's length, and a direct way to control the balance between search
space reduction and false negatives caused by region selection. Compared to a regular
sliding window search over the images, in our experiments MSR was able to reduce by
75% (on average) the number of windows to be evaluated by an object detector while
improving or at least maintaining detector ROC performance. The proposed method was
thoroughly evaluated over a subset of the LabelMe dataset (person images), improving detection
performance in most cases. This evaluation was done by comparing object detection
performance against different object detectors, with and without MSR. Additionally, we
also provide an evaluation of how different object classes interact with MSR, which was done
using the Pascal VOC 2007 dataset. Finally, tests showed that the window selection performance
of MSR scales well with image size. From the obtained data,
our conclusion is that MSR can provide substantial benefits to existing sliding window
detectors.
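For context, the single-scale spectral residue computation that MSR applies across scales follows Hou and Zhang's spectral residual recipe. The sketch below is a minimal illustration with assumed parameter choices, not the dissertation's implementation.

```python
# Single-scale spectral residual saliency: the log-amplitude spectrum minus its
# local average, recombined with the original phase and transformed back.
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

def spectral_residual_saliency(gray, avg_size=3, blur_sigma=2.5):
    """Return a normalized saliency map for a 2-D grayscale image (float array)."""
    F = np.fft.fft2(gray)
    log_amp = np.log(np.abs(F) + 1e-8)
    phase = np.angle(F)
    # Spectral residue: log amplitude minus its local average.
    residual = log_amp - uniform_filter(log_amp, size=avg_size)
    # Back to the image domain, square, and smooth to obtain the saliency map.
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    sal = gaussian_filter(sal, sigma=blur_sigma)
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)

# MSR-style use: compute such maps over a pyramid of image scales and threshold
# them to keep only the most salient windows for the sliding-window detector.
```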
|
9 |
Micromechanically based multiscale material modeling of polymer nanocomposites. Yu, Jaesang. 30 April 2011.
The Effective Continuum Micromechanics Analysis Code (EC-MAC) was developed for predicting effective properties of composites containing multiple distinct nanoheterogeneities (fibers, spheres, platelets, voids, etc.), each with an arbitrary number of coating layers, based upon either the modified Mori-Tanaka method (MTM) or the self-consistent method (SCM). This code was used to investigate the effect of carbon nanofiber morphology (i.e., hollow versus solid cross-section), nanofiber waviness, and both nanofiber-resin interphase properties and dimensions on bulk nanocomposite elastic moduli. For a given nanofiber axial force-displacement relationship, the elastic modulus for hollow nanofibers can significantly exceed that for solid nanofibers, resulting in notable differences in bulk nanocomposite properties. The development of a nanofiber-resin interphase had a notable effect on the bulk elastic moduli. Consistent with results from the literature, small degrees of nanofiber waviness resulted in a significant decrease in effective composite properties. Key aspects of nanofiber morphology were characterized using transmission electron microscopy (TEM) images for VGCNF/vinyl ester (VE) nanocomposites. Three-parameter Weibull probability density functions were generated to describe the statistical variation in nanofiber outer diameters, wall thicknesses, relative wall thicknesses, visible aspect ratios, and visible waviness ratios. Such information could be used to establish more realistic nanofiber moduli and strengths obtained from nanofiber tensile tests, as well as to develop physically motivated computational models for predicting nanocomposite behavior. This study represents one of the first attempts to characterize the distribution of VGCNF features in real thermoset nanocomposites. In addition, the influence of realistic nanoreinforcement geometries, distinct elastic properties, and orientations on the effective elastic moduli was addressed. The effect of multiple distinct heterogeneities, including voids, on the effective elastic moduli was investigated. For the composites containing randomly oriented wavy vapor-grown carbon nanofibers (VGCNFs) and voids, the predicted moduli captured the essential character of the experimental data, where the volume fraction of voids was approximated as a nonlinear function of the volume fraction of reinforcements. This study should facilitate the development of multiscale materials design by providing insight into the relationships between nanomaterial morphology and properties across multiple spatial scales that lead to improved macroscale performance.
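For reference, the three-parameter Weibull probability density function used for such statistical descriptions has the standard form (shape k, scale λ, location θ):

```latex
% Three-parameter Weibull probability density function (standard form):
f(x;\,k,\lambda,\theta) =
\frac{k}{\lambda}\left(\frac{x-\theta}{\lambda}\right)^{k-1}
\exp\!\left[-\left(\frac{x-\theta}{\lambda}\right)^{k}\right],
\qquad x \ge \theta,
% with f(x) = 0 for x < \theta; k, \lambda and \theta are fitted to the
% measured nanofiber dimensions.
```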
|
10 |
Multiscale and Dirichlet Methods for Supply Chain Order Simulation. Sabin, Robert Paul Travers. 23 April 2019.
Supply chains are complex systems. Researchers in the Social and Decision Analytics Laboratory (SDAL) at Virginia Tech worked with a major global supply chain company to simulate an end-to-end supply chain. The supply chain data includes raw materials, production lines, inventory, customer orders, and shipments. Including contributions of this author, Pires, Sabin, Higdon et al. (2017) developed simulations for the production, customer orders, and shipments. Customer orders are at the center of understanding behavior in a supply chain. This dissertation continues the supply chain simulation work by improving the order simulation. Orders come from a diverse set of customers with different habits. These habits can differ when it comes to which products they order, how often they order, how spaced out those order times are, and how much of each of those products is ordered. This dissertation is unique in that it relies extensively on Dirichlet and multiscale methods to tackle supply-chain order simulation. Multiscale model methodology is furthered to include Dirichlet models which are used to simulate order times for each customer and the collective system on different scales.

This dissertation continues the supply chain simulation work of researchers (Pires et al. (2017)) in the Social and Decision Analytics Laboratory (SDAL) at Virginia Tech by improving the order simulation. Orders come from a diverse set of customers with different habits. These habits can differ when it comes to which products they order, how often they order, how spaced out those order times are, and how much of each of those products is ordered. This dissertation differs from the previous work at SDAL, which considered few of these factors in order simulation, and introduces statistical methodologies to deal with the complex nature of simulating an entire supply chain's orders.
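As a flavor of the Dirichlet idea, the toy sketch below draws a customer's product mix from a Dirichlet distribution and allocates an order's units accordingly. All numbers and names are assumptions for illustration, not the dissertation's fitted model.

```python
# Toy Dirichlet order simulation: a customer's mix of products is drawn from a
# Dirichlet distribution whose concentration reflects that customer's history,
# and the order's product quantities follow from the sampled proportions.
import numpy as np

rng = np.random.default_rng(0)

# Assumed example: counts of past orders per product for one customer.
history = np.array([40, 10, 5, 1], dtype=float)   # products A, B, C, D
alpha = history + 1.0                              # posterior-style concentration

def simulate_order(total_units):
    """Draw one order: sample a product mix, then allocate units to products."""
    mix = rng.dirichlet(alpha)                     # proportions summing to 1
    return rng.multinomial(total_units, mix)       # units of each product

print(simulate_order(total_units=100))             # e.g. array like [71 18  9  2]
```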
|