About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
181

Geographic and demographic transmission patterns of the 2009 A/H1N1 influenza pandemic in the United States

Kissler, Stephen Michael January 2018 (has links)
This thesis describes how transmission of the 2009 A/H1N1 influenza pandemic in the United States varied geographically, with emphasis on population distribution and age structure. This is made possible by the availability of medical claims records maintained in the private sector that capture the weekly incidence of influenza-like illness in 834 US cities. First, a probabilistic method is developed to infer each city's outbreak onset time. This reveals a clear wave-like pattern of transmission originating in the south-eastern US. Then, a mechanistic mathematical model is constructed to describe the between-city transmission of the epidemic. A model selection procedure reveals that transmission to a city is modulated by its population size, surrounding population density, and possibly by students mixing in schools. Geographic variation in transmissibility is explored further by nesting a latent Gaussian process within the mechanistic transmission model, revealing a possible region of elevated transmissibility in the south-eastern US. Then, using the mechanistic model and a probabilistic back-tracing procedure, the geographic introduction sites (the 'transmission hubs') of the outbreak are identified. The transmission hubs of the 2009 pandemic were generally mid-sized cities, contrasting with the conventional perspective that major outbreaks should start in large population centres with high international connectivity. Transmission is traced forward from these hubs to identify 'basins of infection', or regions where outbreaks can be attributed with high probability to a particular hub. The city-level influenza data is also separated into 12 age categories. Techniques adapted from signal processing reveal that school-aged children may have been key drivers of the epidemic. Finally, to provide a point of comparison, the procedures described above are applied to the 2003-04 and 2007-08 seasonal influenza outbreaks. Since the 2007-08 outbreak featured three antigenically distinct strains of influenza, it is possible to identify which antigenic strains may have been responsible for infecting each transmission hub. These strains are identified using a probabilistic model that is joined with the geographic transmission model, providing a link between population dynamics and molecular surveillance.
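The between-city model is summarized only at a high level in the abstract, but its flavor can be illustrated with a minimal hazard-based simulation in which a city's chance of seeing an introduction grows with its population size. This sketch is purely illustrative; the populations, the parameters alpha and beta, and the hazard form are hypothetical, not the thesis's fitted model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical metapopulation: population sizes and unknown onset weeks.
n_cities = 5
pop = rng.uniform(5e4, 2e6, n_cities)   # city population sizes
onset = np.full(n_cities, np.inf)
onset[0] = 0.0                           # seed city (a 'transmission hub')

alpha, beta = 0.5, 2e-4                  # hypothetical shape/scale parameters

for week in range(1, 52):
    infected = onset < week
    if infected.all():
        break
    for j in np.flatnonzero(~infected):
        # Hazard of introduction scales with the susceptible city's
        # population (raised to a power) and the number of infected cities.
        hazard = beta * pop[j] ** alpha * infected.sum()
        if rng.random() < 1.0 - np.exp(-hazard):
            onset[j] = week

print(onset)   # smaller cities tend to light up later in the wave
```

Back-tracing the most probable source of each introduction under such a model is what identifies the transmission hubs.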
182

Extending standard outdoor noise propagation models to complex geometries / Extension des modèles standards de propagation du bruit extérieur pour des géométries complexes

Kamrath, Matthew 28 September 2017 (has links)
Noise engineering methods (e.g. ISO 9613-2 or CNOSSOS-EU) efficiently approximate sound levels from roads, railways, and industrial sources in cities. However, these engineering methods are limited to simple box-shaped geometries. This dissertation develops and validates a hybrid method that extends the engineering methods to more complicated geometries by introducing an extra attenuation term representing the influence of a real object relative to a simplified one.

Calculating the extra attenuation term requires reference calculations to quantify the difference between the complex and simplified objects. Since performing a reference computation for each propagation path is too computationally expensive, the extra attenuation term is linearly interpolated from a data table containing the corrections for a large set of source positions, receiver positions, and frequencies. Here, the 2.5D boundary element method produces the reference levels for the real complex geometry and the simplified geometry, and subtracting these levels yields the corrections in the table; in principle, other reference methods could be used.

This dissertation validates the hybrid method for a T-shaped barrier with hard ground, with soft ground, and with buildings. All three cases demonstrate that the hybrid method is more accurate than standard engineering methods for complex cases.
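A minimal sketch of the tabulation-plus-interpolation step might look as follows; the grid axes and the random stand-ins for the BEM levels are hypothetical, and a real table would be filled from 2.5D BEM runs for the complex (e.g. T-shaped) and simplified (box-shaped) barriers:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical grid axes for the correction table.
src_x = np.linspace(-50, 0, 11)                    # source distance to barrier (m)
rcv_x = np.linspace(0, 100, 21)                    # receiver distance (m)
freq = np.array([125., 250., 500., 1000., 2000.])  # octave bands (Hz)

# L_complex and L_simple would come from 2.5D BEM runs; random stand-ins here.
rng = np.random.default_rng(1)
L_complex = rng.normal(60, 3, (11, 21, 5))   # levels behind the T-shaped barrier
L_simple = rng.normal(63, 3, (11, 21, 5))    # levels behind the box-shaped barrier

# Extra attenuation of the complex object relative to the simple one,
# tabulated once and linearly interpolated per propagation path.
extra_att = RegularGridInterpolator((src_x, rcv_x, freq), L_simple - L_complex)

L_engineering = 58.0                          # dB, e.g. from ISO 9613-2
correction = extra_att([[-25.0, 40.0, 500.0]])[0]
print(L_engineering - correction)             # hybrid level for this path
```

Linear interpolation over the table keeps the per-path cost close to that of the engineering method while retaining the reference accuracy at the tabulated points.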
183

Adaptive methods for autonomous environmental modelling

Kemppainen, A. (Anssi) 26 March 2018 (has links)
Abstract

In this thesis, we consider autonomous environmental modelling, where robotic sensing platforms are used for environmental surveying. To cover a wide range of environments, our models must be flexible to the data under a few a priori assumptions. Correspondingly, to guide action planning, we need a unified sensing-quality metric that depends on the prediction quality of our models. Finally, to adapt to the observed information, each iteration of the action-planning algorithm must provide solutions that minimize the travelling time needed to reach a given level of sensing quality. These are the main topics of this thesis.

At the center of our approaches are stationary and non-stationary Gaussian processes based on the assumption that the observed phenomenon arises from the diffusion of white noise, where the anisotropy and scale of the diffusion kernel may vary between locations. For these models, we propose adapting the diffusion kernels with a structure-tensor approach. Experiments demonstrate that, provided sensor noise is not dominant, our iterative approach returns diffusion-kernel values close to the correct ones.

To quantify how precise our models are, we propose a mutual-information-based sensing-quality criterion and prove that the optimal design under this criterion provides the best prediction quality for the model. To incorporate localization uncertainty into the modelling, we also propose an approach in which the posterior model is marginalized over the distribution of sensing paths. The benefit is that this approach implicitly favors actions that lead to previously visited or otherwise well-localized areas while maximizing the information gain. Experiments support our claims that the proposed approaches are best in terms of predictive-distribution quality.

In action planning, our approach is to use graph-based approximation algorithms to reach a given level of model quality efficiently. To account for spatial dependency and active localization, we propose adaptation methods that map sensing quality to vertex prices in a graph. Experiments demonstrate the benefit of these adaptation methods over action-planning algorithms that do not consider these specific features.
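A generic form of the mutual-information sensing-quality criterion mentioned above can be sketched for a Gaussian process model as below; the stationary RBF kernel, its parameters, and the two toy paths are assumptions of this illustration, whereas the thesis works with non-stationary diffusion kernels:

```python
import numpy as np

def rbf_kernel(X1, X2, scale=1.0, length=0.5):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return scale * np.exp(-0.5 * d2 / length ** 2)

def sensing_quality(X_sensed, noise_var=0.1):
    """Mutual information between noisy observations along a sensing path
    and the latent GP field: I = 0.5 * log det(I + K / sigma^2)."""
    K = rbf_kernel(X_sensed, X_sensed)
    _, logdet = np.linalg.slogdet(np.eye(len(K)) + K / noise_var)
    return 0.5 * logdet

# Compare two hypothetical sensing paths with the same number of samples:
path_a = np.column_stack([np.linspace(0, 1, 10), np.zeros(10)])  # straight line
path_b = np.random.default_rng(2).uniform(0, 1, (10, 2))         # spread out
print(sensing_quality(path_a), sensing_quality(path_b))
```

Spread-out samples are less correlated under the kernel and therefore carry more information, which is the property an informative-path planner exploits.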
184

Road features detection and sparse map-based vehicle localization in urban environments / Detecção de características de rua e localização de veículos em ambientes urbanos baseada em mapas esparsos

Alberto Yukinobu Hata 13 December 2016 (has links)
Localization is a fundamental component of autonomous vehicles, enabling tasks such as overtaking, lane keeping, and self-navigation. Urban canyons and bad weather interfere with the reception of GPS satellite signals, which precludes relying exclusively on GPS for vehicle localization in urban areas. Alternatively, map-aided localization methods have been employed to enable position estimation without depending on GPS devices. In this solution, the vehicle position is given as the place where the sensor measurement best matches the environment map. Before building the maps, features of the environment must be extracted from sensor measurements. In vehicle localization, curbs and road markings have been extensively employed as mapping features. However, most urban mapping methods rely on a street free of obstacles or require repeated measurements of the same place to avoid occlusions. An accurate representation of the environment is necessary for a proper match of sensor measurements to the map during localization. To avoid a manual process for removing occluding obstacles and unobserved areas, a vehicle localization method is proposed that supports maps built from partial observations of the environment.

In this localization system, maps are formed from curbs and road markings extracted from multilayer laser sensor measurements. Curb structures are detected even in the presence of vehicles that occlude the roadsides, thanks to the use of robust regression. The road-marking detector employs Otsu thresholding of infrared remittance data, which makes the method insensitive to illumination. Detected road features are stored in two map representations: the occupancy grid map (OGM) and the Gaussian process occupancy map (GPOM). The first is a popular map structure that represents the environment through fine-grained grids; the second is a continuous representation that can estimate the occupancy of unseen areas. The Monte Carlo localization (MCL) method was adapted to support the obtained maps of the urban environment; vehicle localization was thus tested with an MCL that uses the OGM and an MCL that uses the GPOM. Specifically, for MCL based on the GPOM, a new measurement likelihood based on the multivariate normal probability density function is formulated. Experiments were performed in real urban environments. Maps were built from sparse laser data to verify the reconstruction of unobserved areas. The localization system was evaluated by comparing its results with a high-precision GPS device, and GPOM-based localization was also compared with OGM-based localization to determine which approach performs better.
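A minimal sketch of an MCL measurement update against a GPOM might look like the following; the interface gpom_predict, the occupancy target, and the diagonal-covariance likelihood are assumptions for illustration, not the thesis's exact formulation:

```python
import numpy as np

def mcl_weights(particles, z_hits, gpom_predict):
    """Weight candidate poses by a Gaussian measurement likelihood.

    particles    : (N, 3) array of poses (x, y, heading)
    z_hits       : (M, 2) laser end-points in the sensor frame
    gpom_predict : callable mapping world points -> (occupancy mean, variance),
                   standing in for the GPOM's predictive output
    """
    log_w = np.empty(len(particles))
    for i, (x, y, th) in enumerate(particles):
        R = np.array([[np.cos(th), -np.sin(th)],
                      [np.sin(th),  np.cos(th)]])
        pts = z_hits @ R.T + [x, y]        # hits transformed into the map frame
        mu, var = gpom_predict(pts)
        # Independent-beam multivariate normal likelihood: end-points should
        # land on occupied space (occupancy target = 1).
        log_w[i] = -0.5 * np.sum((1.0 - mu) ** 2 / var + np.log(2 * np.pi * var))
    w = np.exp(log_w - log_w.max())        # normalize in log space for stability
    return w / w.sum()

# Toy usage with a stand-in GPOM that marks a disc around the origin occupied:
def toy_gpom(pts):
    mu = (np.linalg.norm(pts, axis=1) < 5.0).astype(float)
    return mu, np.full(len(pts), 0.1)

particles = np.array([[0.0, 0.0, 0.0], [20.0, 0.0, 0.0]])
hits = np.array([[4.0, 0.0], [0.0, 4.0]])
print(mcl_weights(particles, hits, toy_gpom))  # first pose dominates
```

Because the GPOM returns a predictive variance everywhere, beams falling in unobserved regions are down-weighted rather than treated as hard mismatches, which is what lets the filter work with maps built from partial observations.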
185

Métodos de Monte Carlo Hamiltoniano na inferência Bayesiana não-paramétrica de valores extremos / Monte Carlo Hamiltonian methods in non-parametric Bayesian inference of extreme values

Marcelo Hartmann 09 March 2015 (has links)
In this work we propose a Bayesian nonparametric approach for modelling data with extreme behavior. We treat the location parameter μ of the generalized extreme value distribution as a random function and place a Gaussian process prior on that function (Rasmussen & Williams 2006). This configuration leads to an analytically intractable, high-dimensional posterior distribution. To tackle this problem we use the Riemannian Manifold Hamiltonian Monte Carlo method, which allows sampling from posterior distributions with complex forms and unusual correlation structures (Calderhead & Girolami 2011). Moreover, we propose an autoregressive time-series model of order p, assuming the generalized extreme value distribution for the noise, and derive its Fisher information matrix. Throughout this work we use computational simulation studies to assess the quality of the algorithm in its variants, and we present several examples with real and simulated datasets.
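The log-posterior that such a sampler must explore can be sketched directly from the model description: a GEV likelihood whose location parameter follows a GP prior. The RBF kernel, the fixed scale and shape values, and scipy's genextreme shape convention (c = -ξ) are assumptions of this illustration:

```python
import numpy as np
from scipy.stats import genextreme, multivariate_normal

# Hypothetical data: GEV observations y whose location mu(x) follows a
# zero-mean GP prior; the scale and shape are held fixed for the sketch.
rng = np.random.default_rng(3)
x = np.linspace(0, 1, 50)[:, None]
K = np.exp(-0.5 * (x - x.T) ** 2 / 0.2 ** 2) + 1e-6 * np.eye(50)  # RBF prior cov
mu_true = rng.multivariate_normal(np.zeros(50), K)
y = genextreme.rvs(c=-0.1, loc=mu_true, scale=1.0, random_state=rng)

def log_posterior(mu, sigma=1.0, c=-0.1):
    # scipy's genextreme shape is c = -xi in the usual GEV parameterization.
    log_lik = genextreme.logpdf(y, c=c, loc=mu, scale=sigma).sum()
    log_prior = multivariate_normal.logpdf(mu, mean=np.zeros(50), cov=K)
    return log_lik + log_prior

print(log_posterior(mu_true))
```

Riemannian Manifold HMC then uses local curvature information (e.g. the Fisher information) of this density as a metric tensor, adapting its proposals to the strong correlations the GP prior induces.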
186

Remaining useful life estimation of critical components based on Bayesian Approaches. / Prédiction de l'état de santé des composants critiques à l'aide de l'approche Bayesienne

Mosallam, Ahmed 18 December 2014 (has links)
Constructing prognostics models relies on understanding the degradation process of the monitored critical components in order to correctly estimate their remaining useful life (RUL). Traditionally, a degradation process is represented in the form of physical or expert models. Such models require extensive experimentation and verification, which are not always feasible in practice. Another approach, known as data-driven, builds knowledge about the system degradation over time from component sensor data; data-driven models require that sufficient historical data have been collected.

In this work, a two-phase data-driven method for RUL prediction is presented. In the offline phase, the proposed method finds the variables that contain the most information about the degradation behavior, using an unsupervised variable-selection algorithm developed in the thesis. Different health indicators (HIs), representing the degradation as a function of time, are then constructed from the selected variables (one per component history) and saved in an offline database as reference models. In the online phase, the method estimates the degradation state of the test component using a discrete Bayesian filter. It then finds the offline health indicator most similar to the online one, using a k-nearest neighbors (k-NN) classifier together with Gaussian process regression (GPR), and uses it as a RUL estimator: the RUL is obtained by comparing the current health indicator with the health indicators learned offline.

The method is verified using PRONOSTIA bearing data as well as battery and turbofan engine degradation data acquired from the NASA data repository. The results show the effectiveness of the method in predicting the RUL.
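A toy version of the online stage — matching the partial online health indicator to the offline library with k-NN, with a GPR available to smooth or extrapolate the indicator — can be sketched as follows. The synthetic trajectories and the simple averaged-remaining-time rule are stand-ins, not the method's actual estimator:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Hypothetical offline library: one HI trajectory per run-to-failure
# history, each ending at its known failure time.
rng = np.random.default_rng(4)
offline_his = [np.cumsum(rng.exponential(0.02, n)) for n in (80, 100, 120)]
failure_times = [len(h) for h in offline_his]

def estimate_rul(online_hi, k=2):
    """Match the online HI to the k most similar offline histories."""
    L = len(online_hi)
    dists = [np.linalg.norm(h[:L] - online_hi) for h in offline_his]
    nearest = np.argsort(dists)[:k]
    # A GPR fitted to the online HI can smooth/extrapolate its growth; the
    # RUL returned here is simply the averaged remaining time of the matched
    # histories (a stand-in for the method's actual estimator).
    t = np.arange(L)[:, None]
    gpr = GaussianProcessRegressor(kernel=RBF(length_scale=10.0)).fit(t, online_hi)
    rul = np.mean([failure_times[i] - L for i in nearest])
    return rul, gpr

rul, _ = estimate_rul(offline_his[0][:60])
print(rul)   # remaining cycles estimated for a partially observed unit
```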
187

Image Segmentation, Parametric Study, and Supervised Surrogate Modeling of Image-based Computational Fluid Dynamics

Islam, Md Mahfuzul 05 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / With the recent advancement of computation and imaging technology, image-based computational fluid dynamics (ICFD) has emerged as a powerful non-invasive capability for studying biomedical flows. These modern technologies increase the potential of computation-aided diagnostics and therapeutics in a patient-specific environment. In this work, I studied three components of the image-based computational fluid dynamics process. To ensure accurate medical assessment, realistic computational analysis is needed, for which patient-specific image segmentation of the diseased vessel is of paramount importance. Image segmentation of several human arteries, veins, capillaries, and organs was therefore conducted for use in subsequent hemodynamic simulations, implemented with several open-source and commercial software packages. This study incorporates a new computational platform, called InVascular, to quantify the 4D velocity field in image-based pulsatile flows using the Volumetric Lattice Boltzmann Method (VLBM). We also conducted several parametric studies on an idealized case of a 3-D pipe with the dimensions of a human renal artery, investigating the relationship between stenosis severity and resistive index (RI) and exploring how pulsatile parameters such as heart rate or pulsatile pressure gradient affect RI. Because ICFD analysis is based on imaging and other hemodynamic data, it is often time-consuming due to the extensive data processing involved; for clinicians to make fast medical decisions for their patients, rapid and accurate ICFD results are needed. To that end, we also developed surrogate models, demonstrating the potential of supervised machine-learning methods for constructing efficient and precise surrogates for Hagen-Poiseuille and Womersley flows.
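As a flavor of the surrogate-modeling contribution, the sketch below trains a small neural-network regressor on the analytic Hagen-Poiseuille relation Q = πΔP·r⁴/(8μL). The sampling ranges and fluid parameters are hypothetical, and the actual surrogates (e.g. for Womersley flow) are more involved:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hagen-Poiseuille flow rate: a cheap analytic stand-in for the
# expensive ICFD solve that the surrogate replaces.
rng = np.random.default_rng(5)
dP = rng.uniform(1e2, 1e4, 2000)     # pressure drop (Pa), hypothetical range
r = rng.uniform(1e-3, 5e-3, 2000)    # radius (m), roughly renal-artery scale
mu, L = 3.5e-3, 0.05                 # blood viscosity (Pa s), segment length (m)
Q = np.pi * dP * r ** 4 / (8 * mu * L)

X = np.column_stack([dP, r])
model = make_pipeline(StandardScaler(),
                      MLPRegressor((64, 64), max_iter=2000, random_state=0))
model.fit(X, np.log(Q))              # log target tames the r**4 dynamic range

# The trained surrogate answers in microseconds what a full solve takes far longer for.
print(np.exp(model.predict([[5e3, 3e-3]])))
```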
188

Mission-based Design Space Exploration and Traffic-in-the-Loop Simulation for a Range-Extended Plug-in Hybrid Delivery Vehicle

Anil, Vijay Sankar January 2020 (has links)
No description available.
189

ACCELERATING COMPOSITE ADDITIVE MANUFACTURING SIMULATIONS: A STATISTICAL PERSPECTIVE

Akshay Jacob Thomas (7026218) 04 August 2023 (has links)
Extrusion deposition additive manufacturing is a process by which short-fiber-reinforced polymers are extruded through a screw and deposited onto a build platform following a set of instructions specified in the form of machine code. The highly non-isothermal process can lead to undesired effects in the form of residual deformation and part delamination. Process simulations that can predict residual deformation and part delamination have been a thrust area of research, aiming to avoid repeated trial and error before a useful part is produced. However, populating the material properties required for the process simulations requires extensive characterization efforts. Tackling this experimental bottleneck is the focus of the first half of this research.

The first contribution is a method to infer the fiber orientation state from tensile tests alone. While measuring the fiber orientation state using computed tomography and optical microscopy is possible, these techniques are time-consuming and limited to fibers with circular cross-sections. Knowledge of the fiber orientation is extremely useful for populating material properties using micromechanics models. To that end, two methods to infer the fiber orientation state are proposed. The first is a Bayesian methodology that accounts for aleatoric and epistemic uncertainty. The second is a deterministic method that returns an average value of the fiber orientation state and polymer properties. The inferred orientation state is validated by performing process simulations using material properties populated from it. A different challenge arises when dealing with multiple extrusion systems: even the same material printed on a different extrusion system requires an engineer to redo the material characterization (due to changes in microstructure), which makes characterization expensive and time-consuming. The objective of the second contribution is therefore to address this experimental bottleneck and use prior information about a material manufactured in one extrusion system to predict its properties when manufactured in another. A framework that transfers thermal conductivity data while accounting for uncertainties from different sources is presented. The predicted properties are compared to experimental measurements and found to be in good agreement.

While process simulations using finite element methods provide a reliable framework for predicting residual deformation and part delamination, they are often computationally expensive. Tackling this computational bottleneck is the focus of the second half of this dissertation. To that end, as the third contribution, a neural-network-based solver is developed that can solve parametric partial differential equations. This is attained by deriving the weak form of the governing partial differential equation. Using this variational form, a novel loss function is proposed that does not require evaluating the integrals arising from the weak form with Gauss quadrature. Instead, the integrals are identified as expectation values, for which an unbiased estimator is developed. The method is tested on parabolic and elliptic partial differential equations, and the results compare well with conventional solvers.

Finally, the fourth contribution of this dissertation uses the new solver to solve heat transfer problems in additive manufacturing without discretizing the time domain. A neural network solves the governing equations in the evolving geometry. The weak-form loss is altered to account for the evolving geometry using a novel sequential collocation sampling method. This work forms the foundation for solving parametric problems in additive manufacturing.
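The quadrature-free loss idea can be illustrated with a deep-Ritz-style sketch for the 1-D Poisson problem -u'' = f on (0, 1) with zero boundary values: the weak-form energy J(u) = E_x[½u'(x)² − f(x)u(x)] is an expectation over x ~ Uniform(0, 1), so a minibatch mean is an unbiased Monte Carlo estimator of the loss. This illustrates the general idea under those assumptions, not the dissertation's exact parametric formulation:

```python
import torch

# Energy functional for -u'' = f on (0, 1), u(0) = u(1) = 0, written as
#   J(u) = E_x[0.5 * u'(x)**2 - f(x) * u(x)],  x ~ Uniform(0, 1),
# so a minibatch mean is an unbiased estimate and no Gauss quadrature is needed.
net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))
f = lambda x: torch.ones_like(x)               # constant source term
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    x = torch.rand(256, 1, requires_grad=True)
    u = x * (1 - x) * net(x)                   # hard-encoded boundary conditions
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    loss = (0.5 * du ** 2 - f(x) * u).mean()   # unbiased estimate of J(u)
    opt.zero_grad(); loss.backward(); opt.step()

# The minimizer approximates the exact solution u(x) = x * (1 - x) / 2.
print(net(torch.tensor([[0.5]])).item() * 0.25)  # compare with 0.125
```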
190

INVESTIGATING DAMAGE IN SHORT FIBER REINFORCED COMPOSITES

Ronald F Agyei (11201085) 29 July 2021 (has links)
In contrast to traditional steel and aluminum, short-fiber-reinforced polymer composites (SFRCs) offer promising alternatives for material selection in automotive and aerospace applications due to their potential to decrease weight while maintaining excellent mechanical properties. However, uncertainties about the influence of complex microstructures and defects on mechanical response have prevented widespread adoption of material models for SFRCs. Building confidence in these models' predictions requires deeper insight into the heterogeneous damage mechanisms. This research therefore takes a micromechanics standpoint in assessing the damage behavior of SFRCs, particularly micro-void nucleation at fiber tips, by passing information about microstructural attributes within neighborhoods of incipient damage and non-damage sites into a framework that establishes correlations between the microstructural information and damage. To achieve this, in-situ x-ray tomography of the gauge sections of two cylindrical injection-molded dog-bone specimens, composed of E-glass fibers in a polypropylene matrix, was conducted while the specimens were monotonically loaded until failure. This was followed by (i) the development of microstructural characterization frameworks for segmenting fiber and porosity features in 3D images, (ii) the development of a digital-volume-correlation-informed damage detection framework that confines the search space of potential damage sites, and (iii) the use of a Gaussian process classification framework to explore the dependency of micro-void nucleation on neighboring microstructural defects by ranking each of their contributions. Specifically, the analysis considered microstructural metrics related to the closest fiber, the closest pore, and the local stiffness, and the results demonstrated that less stiff, resin-rich areas were more relevant for micro-void nucleation than clustered fiber tips, T-intersections of fibers, or varying porosity volumes. This analysis provides a ranking of the microstructural metrics that induce micro-void nucleation, which can help modelers validate their predictions of the proclivity for damage initiation in the presence of wide distributions of microstructural features and manufacturing defects.
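One standard way to obtain such a ranking from a Gaussian process classifier is an automatic-relevance-determination (ARD) kernel, in which each input metric receives its own length-scale and shorter learned length-scales flag more influential features; whether this matches the study's exact ranking procedure is an assumption of the sketch, as are the toy features and labels:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

# Toy features per candidate site: distance to the closest fiber tip,
# distance to the closest pore, and local stiffness; toy labels mark
# whether a micro-void nucleated there.
rng = np.random.default_rng(6)
X = rng.normal(size=(200, 3))
y = (X[:, 2] + 0.3 * rng.normal(size=200) < -0.5).astype(int)  # stiffness-driven

# ARD kernel: one length-scale per feature. After fitting, a short learned
# length-scale means the feature strongly modulates nucleation probability.
gpc = GaussianProcessClassifier(kernel=RBF(length_scale=[1.0, 1.0, 1.0])).fit(X, y)
scales = gpc.kernel_.length_scale
print(scales, np.argsort(scales))   # most relevant features listed first
```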
