111
QUANTIFICATION OF PAPILLARY MUSCLE MOTION AND MITRAL REGURGITATION AFTER MYOCARDIAL INFARCTION
Ferguson, Connor R. 01 January 2019 (has links)
Change in papillary muscle motion as a result of left ventricular (LV) remodeling after posterolateral myocardial infarction is thought to contribute to ischemic mitral regurgitation. A finite element (FE) model of the LV was created from magnetic resonance images acquired immediately before myocardial infarction and 8 weeks later in a cohort of 12 sheep. Severity of mitral regurgitation was rated by two-dimensional echocardiography, and regurgitant volume was estimated using MRI. Of the cohort, 6 animals (DC) received a hydrogel injection therapy previously shown to limit ventricular remodeling after myocardial infarction, while the control group (MI) received a similar pattern of saline injections. LV pressure was determined by direct invasive measurement, and volume was estimated from MRI. The FE models of the LV for each animal included both healthy and infarct tissue regions, as well as a simulated hydrogel injection pattern for the DC group. Constitutive material parameters for each region were assigned based on results from previous research. Invasive LV pressure measurements at end diastole and end systole were used as boundary conditions to drive the model simulations for each animal. The passive stiffness (C) and active material parameter (Tmax) were adjusted to match MRI estimates of LV volume at end systole and end diastole. Nodal positions of the chordae tendineae (CT) were determined from measurements of the excised heart of each animal at the terminal timepoint. Changes in CT nodal displacements between end systole and end diastole at the 0- and 8-week timepoints were used to investigate the potential contribution of changes in papillary muscle motion to the progression of ischemic mitral regurgitation after myocardial infarction. Nodal displacements were decomposed into radial, circumferential, and longitudinal components relative to the anatomy of the individual animal model. Model results highlighted an outward radial movement in the infarct region after 8 weeks in untreated animals, while the radial direction of motion in the treated group was preserved relative to baseline. Circumferential displacement decreased in the remote region of the untreated group after 8 weeks but was preserved relative to baseline in the treated group. MRI estimates of regurgitant volume increased significantly in the untreated group after 8 weeks but did not increase in the treated group. These results suggest that hydrogel injection treatment may serve to limit changes in papillary muscle motion and the severity of mitral regurgitation after posterolateral myocardial infarction.
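The radial/circumferential/longitudinal decomposition described in the abstract can be sketched in a few lines. The following is a minimal illustration, assuming the LV long axis (apex to base) defines the longitudinal direction; all coordinates and values are hypothetical stand-ins for the model's nodal data:

```python
import numpy as np

def decompose_displacement(point, disp, apex, base):
    """Project a nodal displacement onto local radial, circumferential,
    and longitudinal directions defined by the LV long axis (apex to base)."""
    # Longitudinal direction: unit vector along the LV long axis
    e_long = (base - apex) / np.linalg.norm(base - apex)
    # Radial direction: from the long axis outward to the node,
    # with the axial component removed
    r = point - apex
    r_perp = r - np.dot(r, e_long) * e_long
    e_rad = r_perp / np.linalg.norm(r_perp)
    # Circumferential direction completes the right-handed triad
    e_circ = np.cross(e_long, e_rad)
    return np.dot(disp, e_rad), np.dot(disp, e_circ), np.dot(disp, e_long)

# Example: a chordae tendineae node displaced between end diastole and end systole
node_ed = np.array([30.0, 5.0, 20.0])   # mm, end-diastolic position
node_es = np.array([28.5, 6.0, 17.0])   # mm, end-systolic position
apex = np.array([0.0, 0.0, 0.0])
base = np.array([0.0, 0.0, 80.0])
radial, circ, longit = decompose_displacement(node_ed, node_es - node_ed, apex, base)
print(f"radial {radial:+.2f}, circumferential {circ:+.2f}, longitudinal {longit:+.2f} mm")
```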
112
An Adaptive Nonparametric Modeling Technique for Expanded Condition Monitoring of Processes
Humberstone, Matthew John 01 May 2010 (has links)
New reactor designs and license extensions for the current fleet have created new condition monitoring challenges. A major challenge is the creation of a data-based model for a reactor that has never been built or operated and therefore has no historical data. This is the motivation behind a hybrid modeling technique based on first-principle models that adapts to incorporate operating reactor data as it becomes available.
An Adaptive Non-Parametric Model (ANPM) was developed for adaptive monitoring of small and medium-sized reactors (SMRs) but is applicable to all designs. Ideally, an adaptive model should be able to adapt to new operating conditions while retaining the ability to differentiate faults from nominal conditions. This was achieved by focusing on two main abilities: first, adjusting the model from simulated conditions to actual operating conditions, and second, adapting to expanded operating conditions. In each case the system will not learn new conditions that represent faulted or degraded operation. The ANPM architecture adapts the model's memory matrix from First Principle Model (FPM) data to data from actual system operation, producing a more accurate model that can adjust to system fluctuations.
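As a rough illustration of the kind of nonparametric, memory-matrix model described here, the sketch below uses an auto-associative kernel estimate seeded with simulated exemplars and replaces the nearest exemplar with a measured observation only when the residual is small, so faulted conditions are not learned. The kernel form, bandwidth, and tolerance are illustrative assumptions, not the ANPM's actual formulation:

```python
import numpy as np

def nonparametric_predict(memory, query, bandwidth=1.0):
    """Auto-associative kernel estimate: predict corrected sensor values
    as a similarity-weighted combination of memory-matrix exemplars."""
    # Euclidean distance from the query to every stored exemplar (row)
    d = np.linalg.norm(memory - query, axis=1)
    w = np.exp(-(d / bandwidth) ** 2)          # Gaussian similarity kernel
    return (w @ memory) / w.sum()              # weighted average of exemplars

def adapt_memory(memory, new_obs, residual_tol):
    """Replace the nearest simulated exemplar with a measured observation,
    but only if it is close to nominal behavior (do not learn faults)."""
    pred = nonparametric_predict(memory, new_obs)
    if np.linalg.norm(new_obs - pred) < residual_tol:
        nearest = np.argmin(np.linalg.norm(memory - new_obs, axis=1))
        memory[nearest] = new_obs              # adapt toward plant data
    return memory

# Memory matrix seeded from first-principle simulation (rows = plant states)
rng = np.random.default_rng(0)
memory = rng.normal(size=(50, 3))
plant_obs = memory[10] + 0.05 * rng.normal(size=3)   # a nominal plant reading
memory = adapt_memory(memory, plant_obs, residual_tol=0.5)
```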
This newly developed adaptive modeling technique was tested in two pilot applications. The first was a heat exchanger modeled at both low and high fidelity in SIMULINK. The ANPM improved monitoring performance over the first-principle model, reducing the average MSE from 0.1451 to 0.0028 over the range of operation. The second pilot application was a flow loop built at the University of Tennessee and simulated in SIMULINK. Monitoring performance again improved, with the average MSE falling from 0.302 to 0.013 over the adaptation range of operation. This research focused on the theory, development, and testing of the ANPM and the corresponding elements of the surveillance system.
113
Proposition d'une Méthode pour l'Ingénierie de l'Alignement Métier/SI: Diagnostic, Évolution, Alternatives technologiques et Décision
Gmati, Islem 13 December 2011 (has links) (PDF)
The value of alignment is widely recognized in both academia and industry. Nevertheless, alignment cannot be treated as an end in itself when the business is in constant evolution. The challenge is therefore to preserve the alignment relationship after a change is implemented; this is referred to as alignment evolution, or co-evolution. The co-evolution challenge is even greater given the large number of legacy systems whose evolution cycle differs from that of the business requirements. The challenge is thus to preserve the alignment relationship while finding the evolution scenario best suited to the situation of the organization and its information system. The proposed DEEVA method addresses this context; it draws on two disciplines, requirements engineering and software engineering, and exploits their interaction to tackle a problem as complex as alignment engineering. The DEEVA method operates in a context of change that takes the organization from an existing situation to a future one, and enriches it with a three-step co-evolution process: (i) alignment diagnosis, based on explicit modeling of the alignment links between the strategic and operational perspectives, which captures misalignment; (ii) capture and specification of the change, anticipation of the required evolutions, and early simulation of their impacts; and (iii) the decision among a range of IT choices for implementing the change in a way that preserves the alignment relationship. DEEVA contributes elements of a solution to this complex problem by proposing and using a set of classifications. These classifications decompose the problems of capturing misalignment and specifying evolution requirements, and carry them through to the choice of the technical scenario for their implementation. This makes it possible to advance the reasoning and progress toward the resolution of a complex, multi-faceted problem. The DEEVA process is based on a set of guidelines that steer engineers through the co-evolution exercise. These guidelines are roughly 30% interactive and 70% algorithmic, and rely on a set of rules and techniques, built on the proposed classifications, that strengthen the guidance and make the approach more systematic. The research was validated by applying the method to a complex, real-world project concerning the transformation and overhaul of the information system of the textile brand of the Groupement des Mousquetaires, which funded this thesis.
114
Structural condition monitoring and damage identification with artificial neural network
Bakhary, Norhisham January 2009
Many methods have been developed and studied to detect damage through changes in the dynamic response of a structure. Owing to their capability to recognize patterns and to handle non-linear and non-unique problems, artificial neural networks (ANNs) have received increasing attention for detecting damage in structures based on vibration modal parameters. Most successful reported applications of ANNs to damage detection are limited to numerical examples and small, controlled experiments. This is because of two main constraints on their practical application to real structures: 1) the inevitable uncertainties in vibration measurement data and in finite element modeling of the structure, which may lead to erroneous predictions of structural condition; and 2) the enormous computational effort required to reliably train an ANN model for structures with many degrees of freedom. Consequently, most applications of ANNs in damage detection are limited to structural systems with a small number of degrees of freedom and quite significant damage levels. In this thesis, a probabilistic ANN model is proposed to take into consideration the uncertainties in the finite element model and the measured data. Rosenblueth's point estimate method is used to reduce the calculations in training and testing the probabilistic ANN model, and the accuracy of the probabilistic model is verified by Monte Carlo simulation. Using the probabilistic ANN model, the statistics of the stiffness parameters can be predicted and used to calculate the probability of damage existence (PDE) in each structural member. The reliability and efficiency of this method are demonstrated using both numerical and experimental examples. In addition, a parametric study is carried out to investigate the sensitivity of the proposed method to different damage levels and uncertainty levels. Because an ANN model requires enormous computational effort to train when the number of degrees of freedom is large, a substructuring approach employing multi-stage ANNs is proposed. A structure is divided into several substructures, and each substructure is assessed separately with an independently trained ANN model. Once the damaged substructures are identified, second-stage ANN models are trained for these substructures to identify the damage locations and severities of the structural elements within them. Both numerical and experimental examples are used to demonstrate the probabilistic multi-stage ANN method. This substructuring approach greatly reduces the computational effort while increasing damage detectability, because a finer element mesh can be used. The probabilistic model is also found to give better damage identification than the deterministic approach. A sensitivity analysis is conducted to investigate the effect of substructure size, support condition, and uncertainty level on the damage detectability of the proposed method. The results demonstrate that the detectability of the proposed method is independent of the structure type but dependent on the boundary conditions, substructure size, and uncertainty level.
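Rosenblueth's point estimate method replaces a Monte Carlo sweep with function evaluations at the 2^n sign combinations of mu_i +/- sigma_i. The sketch below propagates input uncertainty through a stand-in surrogate for a trained ANN (the quadratic `ann` function, the threshold, and all numbers are hypothetical) and computes a probability of damage existence under a normal-output assumption:

```python
import itertools
from math import erf, sqrt
import numpy as np

def rosenblueth_moments(f, mu, sigma):
    """Rosenblueth's two-point estimate: evaluate f at all 2^n combinations
    of mu_i +/- sigma_i (equal weights hold for symmetric, uncorrelated
    inputs) and return the approximate mean and std of f(X)."""
    n = len(mu)
    vals = []
    for signs in itertools.product((-1.0, 1.0), repeat=n):
        vals.append(f(mu + np.asarray(signs) * sigma))
    vals = np.asarray(vals)
    return vals.mean(), vals.std()

# Stand-in for a trained ANN mapping measured modal parameters to a
# normalized stiffness parameter (hypothetical surrogate, not a real ANN)
ann = lambda x: 1.0 - 0.3 * x[0] + 0.1 * x[1] ** 2

mu = np.array([0.5, 0.2])        # mean measured modal parameters
sigma = np.array([0.05, 0.02])   # measurement/model uncertainty
m, s = rosenblueth_moments(ann, mu, sigma)

# Probability of damage existence: chance the stiffness falls below
# a fraction of its undamaged value, assuming a normal output
pde = 0.5 * (1.0 + erf((0.9 - m) / (s * sqrt(2.0))))
print(f"stiffness mean {m:.3f}, std {s:.3f}, PDE {pde:.3f}")
```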
115
Feature technology and its applications in computer integrated manufacturing
Ding, Lian January 2003
Computer-aided design and manufacturing (CAD/CAM) has been a focal research area for the manufacturing industry. Genuine CAD/CAM integration is necessary to make products of higher quality with lower cost and shorter lead times. Although CAD and CAM have been used extensively in industry, effective CAD/CAM integration has not been achieved. The major obstacles are the representation of design and process knowledge and the adaptive ability of computer-aided process planning (CAPP). This research aims to develop a feature-based CAD/CAM integration methodology. Artificial intelligence techniques such as neural networks, heuristic algorithms, genetic algorithms, and fuzzy logic are used to tackle these problems. The activities considered include: 1) Component design based on a number of standard feature classes with validity checking. A feature classification for machining applications is defined, adopting ISO 10303 STEP AP224, from the combined viewpoints of design and manufacture. 2) Search for interacting features and identification of feature relationships. A heuristic algorithm is proposed to resolve interacting features; it analyses the interacting entity between each feature pair, making the process simpler and more efficient. 3) Recognition of new features formed by interacting features. A novel neural-network-based technique for feature recognition is designed, which resolves the problems of ambiguity and overlap. 4) Production of a feature-based model of the component. 5) Generation of a suitable process plan covering the selection of machining operations, the grouping of machining operations, and process sequencing. A hybrid feature-based CAPP has been developed using neural network, genetic algorithm, and fuzzy evaluation techniques.
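As an illustration of the genetic-algorithm component of such a CAPP system, the sketch below evolves a machining operation sequence under a toy cost that penalizes tool changes; the operations, tools, cost function, and GA settings are all hypothetical, not the thesis's actual formulation:

```python
import random

# Hypothetical operations and the tool each one requires
OPS = ["drill_A", "drill_B", "mill_face", "mill_pocket", "bore_A", "tap_A"]
TOOL = {"drill_A": "drill", "drill_B": "drill", "mill_face": "mill",
        "mill_pocket": "mill", "bore_A": "bore", "tap_A": "tap"}

def cost(seq):
    # One unit of setup cost per tool change along the sequence
    return sum(TOOL[a] != TOOL[b] for a, b in zip(seq, seq[1:]))

def crossover(p1, p2):
    # Order crossover: keep a slice of p1, fill the rest in p2's order
    i, j = sorted(random.sample(range(len(p1)), 2))
    middle = p1[i:j]
    rest = [op for op in p2 if op not in middle]
    return rest[:i] + middle + rest[i:]

def mutate(seq, rate=0.2):
    if random.random() < rate:                 # swap two operations
        i, j = random.sample(range(len(seq)), 2)
        seq[i], seq[j] = seq[j], seq[i]
    return seq

pop = [random.sample(OPS, len(OPS)) for _ in range(30)]
for _ in range(100):
    pop.sort(key=cost)                         # rank by tool-change cost
    elite = pop[:10]                           # keep the best sequences
    pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                   for _ in range(20)]
best = min(pop, key=cost)
print(best, cost(best))
```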
116
A Novel Hip Implant Using 3D Woven Composite Material – Design and Analysis
Adluru, Hari Kishore 02 November 2015 (has links)
The present research analyzes the possibility of implementing three-dimensional woven composite (3DWC) materials in hip implants. Integrating 3DWCs in hip implants has the potential to both extend implant lifetime and improve patient outcomes by spatially varying the mechanical properties to meet both biological needs and the required mechanical loading. In this study, the bulk material properties of 3DWCs were varied through the woven composite architecture and determined using physics-based models that reflect the realistic geometries of fibers in the compacted preform. The multi-digital-chain method combined with extended finite element analysis (XFEA) is adopted in this micro-analysis for composite design. Four different woven architectures with combinations of existing biocompatible fibers and resins are considered. The main objective is to assess the mechanical response of these biocompatible materials in 3D woven architectures and determine their ability to match the required modulus in different regions of a hip implant. The results show that 3DWCs are viable candidates for this application: multiple chosen architectures and materials were able to achieve the desired mechanical response. Further studies can use these results as a starting point and framework for additional mechanical and biological testing.
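A first-order sense of how fiber/resin choice and volume fraction shift a lamina's modulus, and hence how a woven architecture could be graded to approach bone stiffness, comes from the classical rule-of-mixtures bounds. This is an illustrative sketch with nominal constituent values, not the physics-based micromechanics models used in the thesis:

```python
# Rule-of-mixtures bounds on the longitudinal and transverse moduli
# of a fiber/resin lamina (illustrative values, not the thesis data)
def rule_of_mixtures(E_f, E_m, V_f):
    E_long = V_f * E_f + (1.0 - V_f) * E_m            # Voigt (upper) bound
    E_trans = 1.0 / (V_f / E_f + (1.0 - V_f) / E_m)   # Reuss (lower) bound
    return E_long, E_trans

E_carbon, E_peek = 230.0, 3.6   # GPa, nominal biocompatible constituents
for vf in (0.3, 0.45, 0.6):
    E1, E2 = rule_of_mixtures(E_carbon, E_peek, vf)
    print(f"V_f={vf:.2f}: E_long={E1:6.1f} GPa, E_trans={E2:5.2f} GPa")
    # Cortical bone is roughly 15-25 GPa, so varying V_f and weave
    # architecture lets the implant approach the bone's stiffness locally
```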
117
Development Of CAE-based Methodologies For Designing Head Impact Safety Countermeasures
Biswas, Umesh Chandra 09 1900 (has links) (PDF)
No description available.
118
Modeling Dynamic Stall for a Free Vortex Wake Model of a Floating Offshore Wind Turbine
Gaertner, Evan M 07 November 2014 (has links)
Floating offshore wind turbines in deep water offer significant advantages over onshore and near-shore wind turbines. However, because floating platforms move in response to wind and wave loading, the aerodynamics are substantially more complex. Traditional aerodynamic models and design codes do not adequately account for the floating platform dynamics or their effect on turbine loads and performance. Turbines must therefore be overdesigned to cover the loading uncertainty and are not fully optimized for their operating conditions. Previous research at the University of Massachusetts Amherst developed the Wake Induced Dynamics Simulator (WInDS), a free vortex wake model of wind turbines that explicitly includes the velocity components due to platform motion. WInDS rigorously accounts for the unsteady interactions between the wind turbine rotor and its wake; however, as a potential flow model, it neglects the unsteady viscous response in the blade boundary layer. To address this, this thesis presents the development of a Leishman-Beddoes dynamic stall model integrated into WInDS. The stand-alone dynamic stall model was validated against two-dimensional unsteady data from the OSU pitch oscillation experiments, and the coupled model was validated against three-dimensional data from NREL's UAE Phase VI campaign. WInDS with dynamic stall shows substantial improvements in load prediction for both steady and unsteady conditions over the base version of WInDS, and should provide the aerodynamic model fidelity needed for future research and design work on floating offshore wind turbines.
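For orientation, the attached-flow (circulatory) module at the core of the Leishman-Beddoes model can be sketched as two exponential deficiency functions that lag the effective angle of attack behind the pitching motion. The version below is a minimal incompressible form with the commonly used indicial constants; the full model adds impulsive loading, trailing-edge separation, and vortex-shedding states:

```python
import numpy as np

# Standard indicial-response constants and thin-airfoil lift slope
A1, b1, A2, b2 = 0.3, 0.14, 0.7, 0.53
C_N_ALPHA = 2.0 * np.pi   # normal-force slope, 1/rad

def circulatory_lift(alpha, ds):
    """alpha: angle-of-attack history [rad]; ds: nondimensional time step
    (semi-chords travelled). Returns the lagged normal-force history."""
    X = Y = 0.0
    c_n = np.zeros_like(alpha)
    for n in range(1, len(alpha)):
        d_alpha = alpha[n] - alpha[n - 1]
        # First-order recurrences for the two exponential deficiency terms
        X = X * np.exp(-b1 * ds) + A1 * d_alpha * np.exp(-b1 * ds / 2)
        Y = Y * np.exp(-b2 * ds) + A2 * d_alpha * np.exp(-b2 * ds / 2)
        alpha_eff = alpha[n] - X - Y        # effective (lagged) AoA
        c_n[n] = C_N_ALPHA * alpha_eff
    return c_n

# Sinusoidal pitch oscillation, loosely modeled on the OSU-style tests
s = np.linspace(0.0, 100.0, 2001)                 # semi-chords travelled
alpha = np.radians(10.0 + 5.0 * np.sin(0.1 * s))  # mean 10 deg, +/- 5 deg
c_n = circulatory_lift(alpha, s[1] - s[0])
```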
119
Calculation of Scalar Isosurface Area and Applications
Shete, Kedar Prashant 29 October 2019 (links)
The problem of calculating isosurface statistics in turbulent flows is interesting for a number of reasons, among them combustion modeling, entrainment through turbulent/non-turbulent interfaces, calculating mass flux through iso-scalar surfaces, and mapping of scalar fields. A fundamental effect of fluid turbulence is to wrinkle scalar isosurfaces. A review of the literature shows that isosurface calculations have primarily been done with geometric methods, which struggle with highly complex surfaces such as those found in turbulent flows. In this thesis, we propose an alternative integral method and test it against analytical solutions. We present a parallelized algorithm and code to enable in-simulation calculation of isosurface area. We then use this code to calculate area statistics for data obtained from direct numerical simulations and make predictions about the variation of iso-scalar surface area with Taylor Péclet numbers between 9.8 and 4429 and Taylor Reynolds numbers between 98 and 633.
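The integral approach rests on the coarea identity A(c) = ∫ |∇φ| δ(φ − c) dV. Below is a minimal serial sketch under that identity, using a cosine-mollified delta function and tested against the analytical area of a sphere; the parallelized in-simulation code developed in the thesis is far more involved:

```python
import numpy as np

def isosurface_area(phi, c, dx, eps):
    """Integral estimate of isosurface area: A(c) = sum |grad(phi)| * delta,
    with the delta function mollified over a band of half-width eps."""
    g = np.gradient(phi, dx)                     # three gradient components
    grad_mag = np.sqrt(sum(gi ** 2 for gi in g))
    # Cosine-mollified delta (zero outside |phi - c| > eps, unit integral)
    d = phi - c
    delta = np.where(np.abs(d) < eps,
                     (1.0 + np.cos(np.pi * d / eps)) / (2.0 * eps), 0.0)
    return np.sum(grad_mag * delta) * dx ** 3    # volume integral

# Test against an analytical solution: phi = distance from the box center,
# so the c-isosurface is a sphere of area 4*pi*c^2
n = 128
dx = 1.0 / n
x = (np.arange(n) + 0.5) * dx
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
phi = np.sqrt((X - 0.5) ** 2 + (Y - 0.5) ** 2 + (Z - 0.5) ** 2)
area = isosurface_area(phi, c=0.3, dx=dx, eps=3 * dx)
print(f"computed {area:.4f} vs exact {4 * np.pi * 0.3 ** 2:.4f}")
```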
120
Benchmarking, Characterization and Tuning of Shell EcoMarathon Prototype Powertrain
Griess, Eric J 01 March 2015 (links)
With the automotive industry ever striving to push the limits of fuel efficiency, the Shell EcoMarathon embodies this energy-conserving mindset by challenging engineering students around the world to design and build ultra-efficient vehicles for regional competition. This requires coordination across engineering disciplines to ensure that the vehicle and the powertrain system work together toward a common goal.
The goal for Cal Poly – San Luis Obispo's EcoMarathon vehicle for the 2015 competition is to analyze the unique operating mode of the powertrain during competition and to improve the current package to increase fuel efficiency. In this study, fuel delivery, ignition timing, and engine temperature are varied experimentally to observe trends in steady-state fuel consumption. A developmental simulation is then used with these trends to analyze potential differences between transient and steady-state tuning targets. The engine is then tuned to the finalized targets and its performance compared with benchmark values.
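As a sketch of how steady-state sweep data of this kind can be turned into tuning targets, the example below fits a quadratic response surface to synthetic fuel-flow measurements over injection pulse width and spark advance and solves for its stationary point; all variables, ranges, and numbers are hypothetical, not the Cal Poly data:

```python
import numpy as np

# Synthetic steady-state sweep: injector pulse width and ignition advance
rng = np.random.default_rng(1)
pulse = rng.uniform(2.0, 4.0, 40)      # injector pulse width, ms
spark = rng.uniform(10.0, 35.0, 40)    # ignition advance, deg BTDC
# Hypothetical fuel-flow response with a minimum near (2.8 ms, 24 deg)
fuel = (0.40 + 0.05 * (pulse - 2.8) ** 2 + 0.0008 * (spark - 24.0) ** 2
        + 0.005 * rng.normal(size=pulse.size))          # g/s

# Least-squares quadratic surface: f ~ c0 + c1*p + c2*s + c3*p^2 + c4*s^2 + c5*p*s
A = np.column_stack([np.ones_like(pulse), pulse, spark,
                     pulse ** 2, spark ** 2, pulse * spark])
coef, *_ = np.linalg.lstsq(A, fuel, rcond=None)

# Stationary point of the fitted quadratic gives the tuning target
c0, c1, c2, c3, c4, c5 = coef
H = np.array([[2 * c3, c5], [c5, 2 * c4]])
p_opt, s_opt = np.linalg.solve(H, [-c1, -c2])
print(f"fitted optimum: {p_opt:.2f} ms pulse, {s_opt:.1f} deg advance")
```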