71 | Damage detection using angular velocity
Al Jailawi, Samer Saadi Hussein (01 December 2018)
The present work introduces novel methodologies for damage detection and health monitoring of structural and mechanical systems. The new approach uses the angular velocity, measured by a gyroscope, in different mathematical forms to detect, locate, and relatively quantify damage. This approach has been shown to outperform the current state-of-the-art acceleration-based approach in detecting damage on structures. It is also less sensitive to environmental acoustic noise, which presents a major challenge to acceleration-based approaches, and it has been demonstrated to work effectively on arch structures, with which acceleration-based approaches have struggled. The efficacy of the new approach has been investigated through multiple forms of structural damage indices.
The first methodology proposes a damage index based on changes in the second spatial derivative (curvature) of the power spectral density (PSD) of the angular velocity during vibration. The method relies on the output motion only and does not require information about the input forces/motions. The PSD of the angular velocity signal at different locations on structural beams is used to identify the frequencies at which the beams show large angular-velocity magnitudes. The curvature of the PSD of the angular velocity at these peak frequencies is then calculated, and a damage index is presented that measures the differences between the PSD curvature of the angular velocity of a damaged structure and that of an artificial healthy baseline structure.
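To make the mechanics concrete, the following is a minimal sketch of a PSD-curvature index of this kind. The parameter choices (Welch estimation, five peak bins, central-difference curvature) are illustrative assumptions, not the thesis's implementation.

```python
import numpy as np
from scipy.signal import welch

def psd_curvature_index(omega_damaged, omega_baseline, fs, dx, n_peaks=5):
    """Damage index from the spatial curvature of angular-velocity PSDs.

    omega_*: arrays of shape (n_locations, n_samples), angular velocity
    measured by gyroscopes at evenly spaced locations (spacing dx) on a beam.
    """
    def psd_matrix(signals):
        # Welch PSD at each location -> (n_locations, n_freq_bins)
        return np.array([welch(s, fs=fs, nperseg=1024)[1] for s in signals])

    psd_b = psd_matrix(omega_baseline)
    psd_d = psd_matrix(omega_damaged)

    # Frequencies where the beam shows large angular-velocity magnitudes
    peak_bins = np.argsort(psd_b.sum(axis=0))[-n_peaks:]

    def curvature(psd):
        # Second spatial derivative via central differences over locations
        return (psd[2:] - 2.0 * psd[1:-1] + psd[:-2]) / dx**2

    # Index = |curvature(damaged) - curvature(baseline)| at the peak bins,
    # one value per interior location; large values point to damage sites
    return np.abs(curvature(psd_d) - curvature(psd_b))[:, peak_bins].sum(axis=1)
```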
The second methodology proposes a damage index used to detect and locate damage on straight and curved beams. The approach introduces the transmissibility and coherence functions of the output angular velocity between two points on a structure where damage may occur, and calculates a damage index as a metric of the changes in the dynamic integrity of the structure. The damage index considers limited-frequency bands of the transmissibility function at frequencies where the coherence is high. The efficacy of the proposed angular-velocity damage-detection approach, compared with the traditional linear-acceleration approach, was tested on straight and curved beams with different chord heights. Numerical results showed the effectiveness of the angular-velocity approach in detecting multiple levels of damage; the magnitude of the damage index increased with the magnitude of damage, indicating the method's sensitivity to damage intensity. The results on straight and curved beams showed that the proposed approach is superior to the linear-acceleration-based approach, especially for curved beams with increasing chord heights. The experimental results showed that the damage index of the angular-velocity approach exceeded that of the acceleration approach severalfold in detecting damage.
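A hedged sketch of a band-limited transmissibility index of this kind is given below; the estimator choices (a cross-spectral transmissibility estimate and a 0.95 coherence cutoff) are assumptions, not the thesis's exact formulation.

```python
import numpy as np
from scipy.signal import csd, welch, coherence

def transmissibility_index(y1_ref, y2_ref, y1_cur, y2_cur, fs, coh_min=0.95):
    """Band-limited transmissibility damage index between two output points."""
    def transmissibility(y1, y2):
        _, p21 = csd(y2, y1, fs=fs, nperseg=1024)   # cross-spectrum S_21
        _, p11 = welch(y1, fs=fs, nperseg=1024)     # auto-spectrum  S_11
        return np.abs(p21 / p11)                    # |T_21(f)|

    t_ref = transmissibility(y1_ref, y2_ref)
    t_cur = transmissibility(y1_cur, y2_cur)
    _, coh = coherence(y1_cur, y2_cur, fs=fs, nperseg=1024)

    band = coh > coh_min              # keep only high-coherence frequency bins
    return float(np.sum(np.abs(t_cur[band] - t_ref[band])))
```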
A third methodology, for health monitoring and updating of structural supports that resemble bridge bearings, is introduced in this work. The proposed method models the resistance of the supports as rotational springs and uses the transmissibility and coherence functions of the output angular-velocity response in the neighborhood of the bearings to detect changes in the support conditions. The methodology generates a health-monitoring index that evaluates the level of deterioration in the support, together with a support-updating scheme to update the stiffness resistance of the supports. Numerical and experimental examples using beams with different support conditions demonstrate the effectiveness of the proposed method. The results show that the method detected changes in the state of the bearings and successfully updated the stiffness of the supports.
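One way such a support-updating step could look is sketched below; the `model_transmissibility` function (mapping a rotational stiffness to a predicted transmissibility, e.g., backed by a finite element model of the beam) and the stiffness search range are hypothetical placeholders.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def update_support_stiffness(t_measured, freqs, model_transmissibility):
    """Fit a rotational-spring stiffness k so that the model transmissibility
    near the bearing matches the measured one (least-squares misfit)."""
    def misfit(log_k):
        t_model = model_transmissibility(10.0 ** log_k, freqs)
        return float(np.sum((np.abs(t_model) - np.abs(t_measured)) ** 2))

    # Search over a broad, assumed stiffness range (10^3 to 10^9 N*m/rad)
    res = minimize_scalar(misfit, bounds=(3.0, 9.0), method="bounded")
    return 10.0 ** res.x
```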
72 | Sparsity Constrained Inverse Problems - Application to Vibration-based Structural Health Monitoring
Smith, Chandler B (01 January 2019)
Vibration-based structural health monitoring (SHM) seeks to detect, quantify, locate, and prognosticate damage by processing vibration signals measured while the structure is operational. The basic premise of vibration-based SHM is that damage will affect the stiffness, mass, or energy dissipation properties of the structure and in turn alter its measured dynamic characteristics. To make SHM a practical technology, it is necessary to perform damage assessment using only a minimum number of permanently installed sensors. Deducing damage at unmeasured regions of the structural domain requires solving an inverse problem that is underdetermined and/or ill-conditioned. In addition, the effects of local damage on global vibration response may be overshadowed by the effects of modeling error, environmental changes, sensor noise, and unmeasured excitation. These theoretical and practical challenges render the damage identification inverse problem ill-posed, and in some cases unsolvable with conventional inverse methods.
This dissertation proposes and tests a novel interpretation of the damage identification inverse problem. Since damage is inherently local and strictly reduces stiffness and/or mass, the underdetermined inverse problem can be made uniquely solvable by imposing either sparsity or non-negativity on the solution space. The goal of this research is to leverage this concept to prove that damage identification can be performed in practical applications using significantly fewer measurements than conventional inverse methods require. The dissertation investigates two sparsity-inducing methods, L1-norm optimization and non-negative least squares, in their application to identifying damage from eigenvalues, a minimal sensor-based feature that results in an underdetermined inverse problem. This work presents necessary conditions for solution uniqueness and a method to quantify the bounds on the non-unique solution space. The proposed methods are investigated using a wide range of numerical simulations and validated using a four-story lab-scale frame and a full-scale 17 m long aluminum truss. The findings suggest that leveraging the attributes of both L1-norm optimization and non-negativity-constrained least squares can provide significant improvement over their standalone applications and over other existing methods of damage detection.
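The two estimators can be sketched in a few lines; the sensitivity-matrix formulation below is a standard linearization and our assumption, not the dissertation's exact code.

```python
import numpy as np
from scipy.optimize import nnls
from sklearn.linear_model import Lasso

def identify_damage(S, d_lambda, l1_alpha=1e-3):
    """Sparse damage identification from measured eigenvalue changes.

    S: (n_modes, n_elements) sensitivity of eigenvalues to element stiffness
       reductions (n_modes << n_elements -> underdetermined system).
    d_lambda: measured eigenvalue changes between healthy and damaged states.
    Returns damage vectors theta >= 0, one per method.
    """
    # Non-negativity alone: damage can only reduce stiffness
    theta_nnls, _ = nnls(S, d_lambda)

    # L1 penalty promotes a sparse, localized damage pattern
    lasso = Lasso(alpha=l1_alpha, positive=True, fit_intercept=False)
    theta_l1 = lasso.fit(S, d_lambda).coef_

    return theta_nnls, theta_l1
```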
73 | The hippocampus and semantic memory beyond acquisition: a lesion study of hippocampal contributions to the maintenance, updating, and use of remote semantic memory
Klooster, Nathaniel Bloem (01 May 2016)
Semantic memory includes vocabulary and word meanings, conceptual information, and general facts about the world (Tulving, 1972). According to the standard view of semantic memory in cognitive neuroscience, the hippocampus is necessary to first acquire new semantic information (Gabrieli, Cohen, & Corkin, 1988), but these representations are then consolidated in the neocortex and become independent of the hippocampus with time (McClelland, McNaughton, & O'Reilly, 1995). Remote semantic memory is considered independent of the hippocampus, and the hippocampus is not thought to play a critical role in the processing and use of such representations.
The current work challenges the notion that previously acquired semantic knowledge, and its use during communication, is independent of the hippocampus. A group of patients with bilateral hippocampal damage and severe impairments in declarative memory was tested. Intact naming and word-definition matching performance in amnesia has led to the notion that remote semantic memory is intact in patients with hippocampal amnesia. Motivated by perspectives of word learning as a protracted process in which additional features and senses of a word are added over time, and by recent discoveries about the time course of hippocampal contributions to on-line relational processing, reconsolidation, and the flexible integration of information, we revisit the notion that remote semantic memory is intact in amnesia. Using measures of semantic richness and vocabulary depth from psycholinguistics and first- and second-language-learning studies, we examined how much information is associated with previously acquired, highly familiar words in hippocampal amnesic patients. Relative to healthy, demographically matched comparison participants and a group of brain-damaged comparison participants, the patients with hippocampal amnesia performed significantly worse on both productive and receptive measures of vocabulary depth and semantic richness. In the healthy brain, semantic memory appears to grow richer and deeper with time: healthy participants of all ages were tested on these measures, and strong correlations with age were seen, as older healthy adults displayed richer semantic knowledge than younger adults. The patient data suggest a mechanism: hippocampal relational binding supports the deepening and enrichment of knowledge over time. These findings suggest that remote semantic memory is impoverished in patients with hippocampal amnesia and that the hippocampus supports the maintenance and updating of semantic memory beyond its initial acquisition.
The use of lexical and semantic knowledge during discourse was also examined. Amnesic patients displayed significantly lower levels of lexical diversity in the speech they produced and showed a strong trend toward producing language with reduced levels of semantic detail, suggesting that patients cannot use their semantic representations as richly during communication. These results add to a growing body of work detailing a role for the hippocampus in language processing more generally.
By documenting a role for the hippocampus in maintaining, updating, and using semantic knowledge, this work informs theories of semantic memory and its neural bases, advances knowledge of the role of the hippocampus in supporting human behavior, and brings more sensitive measures to the neuroscientific study of semantic memory.
74 | Damage Detection and Fault Diagnosis in Mechanical Systems using Vibration Signals
Nguyen, Viet Ha (29 October 2010)
This thesis aims to identify changes in the dynamic behavior of a mechanical system through the development of identification, detection, and model-updating techniques. Damage or the activation of a nonlinearity is considered responsible for the change.
A well-known classification for damage detection in the literature is used in the thesis; it defines four levels in increasing order of complexity:
- Level 1: Damage detection: inspection for the presence of damage in the structure.
- Level 2: Damage localization: determination of the position of the damage.
- Level 3: Assessment of the severity of the damage.
- Level 4: Prediction of the remaining service life of the structure.
According to the classification described above, the diagnosis problem in this thesis is addressed at the first three levels, i.e., detection, localization, and assessment. The identification of damage or of nonlinearity activation is based on a comparison between a current state and the reference state (in normal condition).
The thesis is organized as follows:
Chapter 1 presents a literature review of modal identification and detection methods. This part describes some of the main characteristics of nonlinear systems, as well as the challenges that nonlinearity presents. The localization and assessment problems are then discussed.
Chapters 2, 3, and 4 focus on fault detection, e.g., the activation of a nonlinearity or the appearance of damage, using three methods respectively: the Wavelet Transform, Second-Order Blind Identification (SOBI), and Kernel Principal Component Analysis (KPCA). Only output signals are used for processing. The first two methods carry out structural monitoring through a modal identification process, while the last method operates directly in feature spaces determined by a chosen kernel function. Detection can be performed by means of the concept of the angle between subspaces, or based on statistics, as sketched below.
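As an illustration of the statistics-based route, the following sketch applies kernel PCA to reference-state feature vectors and flags departures with a simple variance-normalized statistic; the kernel, component count, and threshold are assumed values, not the thesis's settings.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

def kpca_detect(X_ref, X_test, n_components=5, gamma=0.1, pctl=99.0):
    """Output-only novelty detection: learn the reference-state subspace with
    kernel PCA and flag observations whose score departs from the baseline.

    X_ref, X_test: (n_observations, n_features) arrays of output-only
    features (e.g., spectral or modal features of measured responses).
    """
    kpca = KernelPCA(n_components=n_components, kernel="rbf", gamma=gamma)
    z_ref = kpca.fit_transform(X_ref)          # reference (healthy) scores
    z_test = kpca.transform(X_test)

    s = z_ref.std(axis=0) + 1e-12              # per-component scale
    t2 = lambda z: ((z / s) ** 2).sum(axis=1)  # simple T^2-like statistic
    threshold = np.percentile(t2(z_ref), pctl)
    return t2(z_test), t2(z_test) > threshold
```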
The robustness of the methods is illustrated on a beam structure with a geometric nonlinearity at one end; this model was studied within the framework of the European research project COST F3. Other examples are also considered, such as an aircraft mock-up with different damage levels and two industrial applications aimed at quality control of a set of electro-mechanical devices and of welded joints.
Chapter 5 addresses damage localization based on sensitivity analysis of the results of Principal Component Analysis in the frequency domain. Localization is performed by comparing the sensitivities of the principal components between the reference (healthy) state and a damaged state. Only measured responses, e.g., frequency response functions (FRFs), are needed for this purpose.
After the sensitivity analysis of Chapter 5, Chapter 6 addresses parameter assessment, e.g., damage estimation. For this purpose, a model-updating procedure is carried out; this procedure requires building an analytical model of the structure.
The sensitivity analysis for damage localization is illustrated with numerical and experimental data from mass-spring systems and beam structures. A real structure, the I-40 bridge in New Mexico, which was destroyed in 1993, is also examined.
Finally, conclusions are drawn from the work carried out, and some perspectives are proposed for the continuation of this research.
75 | Interface update from older adult users' perspective
Cantar, Andreia; Åström, Eri (January 2013)
It is an unavoidable fact that the interface of a program will change when the program is updated. It is a well-known problem that such changes lead to usability issues, even if the new interface is in itself usable. In an increasingly digitalized society where using computers and the Internet is no longer a matter of interest but a necessity for managing everyday life, it is important that the older generation is included in this rapid development. Older adults generally suffer from physical, motor, and cognitive decline that can create barriers to using computers. A changing interface can be particularly problematic for this age group, and a smooth transition from the old interface to the new one is needed. Fifteen older and five younger computer users were recruited to study how a drastically modified computer interface influences older adults as computer users. Internet Explorer 10 for Windows 8 was used as the testing software for the case study, in which participants were asked to carry out a series of tasks so that the effects of first-time experience with the new interface could be observed. The attitudes and emotions towards the new interface, as well as the difficulties encountered during first-time use, were studied in the thesis. The results showed a clear difference between the younger and older participants. Older participants generally had a more positive attitude towards the new browser, even though they encountered more difficulties during the test. The younger participants managed to complete the tasks with less assistance but were skeptical towards the new interface. Despite the differences in emotional reactions, both groups were reluctant to update to the new interface, which was shown to be particularly problematic for older participants. The results of the study indicate that an interface that undergoes major restructuring is most likely to be problematic for senior computer users. Thus, there is a need for a bridging strategy between the old and the new interface.
76 | A Time-Variant Probabilistic Model for Predicting the Longer-Term Performance of GFRP Reinforcing Bars Embedded in Concrete
Kim, Jeongjoo (May 2010)
Although Glass Fiber Reinforced Polymer (GFRP) has many potential advantages as reinforcement in concrete structures, the loss in tensile strength of a GFRP reinforcing bar can be significant when it is exposed to high-alkali environments. Much effort has been made to estimate the durability performance of GFRP in concrete; however, it is widely believed that data from accelerated aging tests are not appropriate for predicting the longer-term performance of GFRP reinforcing bars. The lack of validated long-term data is the major obstacle to broad application of GFRP reinforcement in civil engineering practice. The main purpose of this study is to evaluate the longer-term deterioration rate of GFRP bars embedded in concrete and to develop an accurate model that can provide better information for predicting the longer-term performance of GFRP bars. In previous studies performed by Trejo, three GFRP bar types (V1, V2, and P type) with two different diameters (16 and 19 mm [0.625 and 0.75 in., referred to as #5 and #6, respectively]) provided by different manufacturers were embedded in concrete beams. After pre-cracking by bending tests, specimens were stored outdoors at the Riverside Campus of Texas A&M University in College Station, Texas. After 7 years of outdoor exposure, the GFRP bars were extracted from the concrete beams and tension tests were performed to estimate the residual tensile strength. Several physical tests were also performed to assess potential changes in the material. It was found that the tensile capacity of the GFRP bars embedded in concrete decreased; however, no significant changes in modulus of elasticity (MOE) were observed. Using these data and limited data from the literature, a probabilistic capacity model was developed using Bayesian updating. The developed probabilistic capacity model appropriately accounts for statistical uncertainties, considering the influence of missing variables and the error remaining due to the inexact model form. In this study, the reduction in tensile strength of GFRP reinforcement embedded in concrete is a function of the diffusion rate of the resin matrix, the bar diameter, and time. The probabilistic model predicts that smaller GFRP bars exhibit faster degradation in tensile capacity than larger GFRP bars. The model indicates that the probability that the strength retention falls below the environmental reduction factor required by the American Concrete Institute (ACI) and the American Association of State Highway and Transportation Officials (AASHTO) for the design of concrete structures containing GFRP reinforcement is 0.4, 0.25, and 0.2 after 100 years for #3, #5, and #6 bars, respectively. The ACI 440 and AASHTO design strength for smaller bars is therefore likely not safe.
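The flavor of such a prediction can be conveyed with a small Monte Carlo sketch. The degradation law, the diffusion-rate distribution, and the 0.7 retention threshold below are illustrative placeholders, not the fitted model or values from this study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder degradation law: an alkali front advances into the bar as
# sqrt(D * t) and only the sound core carries load. All numbers are assumed.
n = 100_000
D = rng.lognormal(mean=np.log(2.6e-16), sigma=1.0, size=n)  # diffusion rate, m^2/s
t = 100 * 365.25 * 24 * 3600.0                              # 100 years in seconds

for label, d_bar in [("#3", 9.5e-3), ("#5", 15.9e-3), ("#6", 19.1e-3)]:
    depth = np.sqrt(D * t)                                  # degraded depth, m
    retention = np.clip(1.0 - depth / (d_bar / 2), 0.0, 1.0) ** 2
    print(f"{label}: P(retention < 0.7) ~ {np.mean(retention < 0.7):.2f}")
```

Even with these toy numbers, the smaller bar shows the higher probability of falling below the threshold, matching the qualitative trend reported above.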
77 | Model Updating Of A Helicopter Structure Using A Newly Developed Correlation Improvement Technique
Altunel, Fatih (01 December 2009)
Numerical model usage has increased substantially in many industries. It is in the aerospace industry that numerical models play perhaps the most important role in the development of optimum designs. However, numerical models need experimental verification, which is used not only for validation but also for updating numerical model parameters. Verified and updated models are used to analyze the vast number of cases that the structure is anticipated to face in real life.
In this thesis, structural finite element model updating of a utility helicopter fuselage was performed as a case study. Initially, experimental modal analyses were performed using modal shakers, and modal analysis of the test results was carried out using LMS Test.lab software. At the same time, finite element analysis of the helicopter fuselage was performed with MSC.Patran/Nastran software.
Updating was first performed for the whole helicopter fuselage; then an attempt was made to update the tail of the helicopter.
Furthermore, a new method was proposed for choosing the optimum node-removal location to obtain a better Modal Assurance Criterion (MAC) matrix. This routine was tried on the helicopter case study, where it performed better than the Coordinate Modal Assurance Criterion (COMAC) that is often used in such analyses.
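For reference, the MAC itself, and one plausible greedy reading of the node-removal idea, can be sketched as follows; the selection heuristic is our assumption, not necessarily the thesis's routine.

```python
import numpy as np

def mac_matrix(phi_a, phi_b):
    """Modal Assurance Criterion between two mode-shape sets (columns = modes)."""
    num = np.abs(phi_a.conj().T @ phi_b) ** 2
    den = np.outer((np.abs(phi_a) ** 2).sum(axis=0),
                   (np.abs(phi_b) ** 2).sum(axis=0))
    return num / den

def best_node_to_remove(phi_test, phi_fe):
    """Greedy pick: the measurement DOF whose removal most improves the mean
    diagonal MAC between test and finite element mode shapes."""
    n_dof = phi_test.shape[0]
    base = np.mean(np.diag(mac_matrix(phi_test, phi_fe)))
    gains = []
    for i in range(n_dof):
        keep = np.delete(np.arange(n_dof), i)
        gains.append(np.mean(np.diag(mac_matrix(phi_test[keep], phi_fe[keep]))) - base)
    return int(np.argmax(gains)), float(max(gains))
```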
78 | Proposal of a rapid model updating and feedback control scheme for polymer flooding processes
Mantilla, Cesar A. (29 November 2010)
The performance of Enhanced Oil Recovery (EOR) processes is adversely affected by the heterogeneous distribution of the flow properties of the rock. The effects of heterogeneity are further highlighted when the mobility ratio between the displacing and displaced fluids is unfavorable. Polymer flooding aims to mitigate this by controlling the mobility ratio, resulting in an increase in volumetric sweep efficiency. However, the design of the polymer injection process has to take into account the uncertainty due to limited knowledge of the heterogeneous properties of the reservoir. Numerical reservoir models equipped with the most up-to-date, yet uncertain, information about the reservoir should be employed to optimize the operational settings. Consequently, the optimal settings are themselves uncertain and should be revised as the model is updated. In this report, a feedback-control scheme is proposed with a model-updating step that conditions prior reservoir models to newly obtained dynamic data, followed by an optimization step that adjusts well control settings to maximize (or minimize) an objective function.
An illustration of the implementation of the proposed closed-loop scheme is presented through an example in which the rate settings of a well affected by water coning are adjusted as the reservoir models are updated. The revised control settings yield an increase in the final value of the objective function. Finally, a fast analog of a polymer flooding displacement is presented that traces the movement of random particles from injectors to producers, following probability rules that reflect the physics of the actual displacement; a sketch of this idea appears below. The algorithm was calibrated against full-physics simulation results from UTCHEM, the compositional chemical flow simulator developed at The University of Texas at Austin. This algorithm can be used for rapid estimation of basic responses such as breakthrough time or recovery factor, and to provide a simplified characterization of the reservoir heterogeneity.
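A toy version of such a particle-tracing proxy, with assumed transition rules (permeability-weighted moves plus a drift toward the producer), might look as follows.

```python
import numpy as np

rng = np.random.default_rng(1)

def breakthrough_proxy(perm, injector, producer, n_particles=500, max_steps=10_000):
    """Random-particle analog of a flood: step counts of particles walking
    from injector to producer give a fast, relative breakthrough estimate."""
    ny, nx = perm.shape
    py, px = producer
    steps = []
    for _ in range(n_particles):
        y, x = injector
        n = 0
        while (y, x) != (py, px) and n < max_steps:
            moves = [(y + dy, x + dx)
                     for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1))
                     if 0 <= y + dy < ny and 0 <= x + dx < nx]
            dist_now = abs(y - py) + abs(x - px)
            # transition weight ~ permeability, boosted if the move gets closer
            w = np.array([perm[m] * (3.0 if abs(m[0] - py) + abs(m[1] - px) < dist_now
                                     else 1.0) for m in moves])
            y, x = moves[rng.choice(len(moves), p=w / w.sum())]
            n += 1
        steps.append(n)
    return float(np.percentile(steps, 10))   # early arrivals ~ breakthrough

perm = np.exp(rng.normal(size=(20, 20)))     # toy log-normal permeability field
print(breakthrough_proxy(perm, (0, 0), (19, 19)))
```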
This report is presented to fulfill the requirements for the degree of Master of Science in Engineering under the fast-track option. It summarizes the research proposal presented for my doctoral studies, which are currently ongoing.
79 | Feedback control of polymer flooding process considering geologic uncertainty
Mantilla, Cesar A. (10 February 2011)
Polymer flooding is economically successful in reservoirs where the waterflood mobility ratio is high and/or the reservoir heterogeneity is adverse, because of the improved sweep resulting from the mobility-controlled oil displacement. The performance of a polymer flood can be further improved if the process is dynamically controlled using updated reservoir models within a closed-loop production optimization scheme. However, the formulation of an optimal production strategy is based on uncertain production forecasts, the uncertainty arising from the spatial representation of reservoir heterogeneity, the geologic scenario, inaccurate modeling, and scaling, to cite just a few factors. Assessing the uncertainty in reservoir modeling and transferring it to uncertainty in production forecasts is crucial for efficiently controlling the process. This dissertation presents a feedback control framework that (1) assesses uncertainty in reservoir modeling and production forecasts, (2) updates the prior uncertainty in reservoir models by integrating continuously monitored production data, and (3) formulates optimal injection/production rates for the updated reservoir models. The approach focuses on uncertainty originating mainly from uncertain geologic scenarios and spatial variations of reservoir properties (heterogeneity). This uncertainty is mapped into a metric space created by comparing multiple reservoir models and measuring differences in effective heterogeneity related to well connectivity and the well responses characteristic of polymer flooding.
Continuously monitored production data are used to refine the uncertainty map using a Bayesian inversion algorithm. In contrast to the classical approach of history matching by model perturbation, a model selection problem is solved in which highly probable reservoir models are selected to represent the posterior uncertainty in production forecasts; the model selection procedure yields the posterior uncertainty associated with the reservoir model (a simplified sketch follows). The production optimization problem is then solved using the posterior models, with a proxy model of polymer flooding to rapidly evaluate the objective function and response surfaces to represent the relationship between well controls and an economic objective function. The value of the feedback control framework is demonstrated with two examples of polymer flooding in which the economic performance was maximized.
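In simplified form, the model-selection step can be read as weighting an ensemble of prior models by the likelihood of the observed data and keeping the highly probable ones; the Gaussian likelihood below is our assumption, not the dissertation's exact algorithm.

```python
import numpy as np

def posterior_model_weights(d_obs, d_sim, sigma):
    """Weight each prior reservoir model by a Gaussian likelihood of the
    observed production data.

    d_obs: (n_data,) observed production responses.
    d_sim: (n_models, n_data) simulated responses, one row per prior model.
    """
    misfit = np.sum((d_sim - d_obs) ** 2, axis=1)
    log_w = -0.5 * misfit / sigma**2
    w = np.exp(log_w - log_w.max())      # stabilize before normalizing
    return w / w.sum()

# Toy usage: the second model fits poorly and receives negligible weight
weights = posterior_model_weights(d_obs=np.array([1.0, 2.0, 3.0]),
                                  d_sim=np.array([[1.1, 2.1, 2.9],
                                                  [0.5, 1.0, 4.0]]),
                                  sigma=0.2)
```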
80 | Assessing cognitive spare capacity as a measure of listening effort using the Auditory Inference Span Test
Rönnberg, Niklas (January 2014)
Hearing loss has a negative effect on the daily life of 10-15% of the world's population. One of the most common ways to treat hearing loss is to fit hearing aids, which increase audibility by providing amplification. Hearing aids thus improve speech reception in quiet, but listening in noise is nevertheless often difficult and stressful. Individual differences in cognitive capacity have been shown to be linked to differences in speech recognition performance in noise. An individual's cognitive capacity is limited and is gradually consumed by increasing demands when listening in noise; fewer cognitive resources are then left to interpret and process the information conveyed by the speech. Listening effort can therefore be explained by the amount of cognitive resources occupied with speech recognition. A well-fitted hearing aid improves speech reception and leads to less listening effort, so an objective measure of listening effort would be a useful tool in the hearing aid fitting process.
In this thesis the Auditory Inference Span Test (AIST) was developed to assess listening effort by measuring an individual's cognitive spare capacity: the cognitive resources remaining to interpret and encode the linguistic content of incoming speech input while speech understanding takes place. The AIST is a dual-task hearing-in-noise test, combining auditory and memory processing, and requires executive processing of speech at different memory load levels (MLLs). The AIST was administered to young adults with normal hearing and older adults with hearing impairment. The aims were 1) to develop the AIST; 2) to investigate how different signal-to-noise ratios (SNRs) affect memory performance for perceived speech; 3) to explore whether this performance would interact with cognitive capacity; 4) to test whether different background noise types would interact differently with memory performance for young adults with normal hearing; and 5) to examine whether these relationships would generalize to older adults with hearing impairment. The AIST is a new test of cognitive spare capacity that uses existing speech material available in several countries and manipulates cognitive load and SNR simultaneously; its design thus pinpoints potential interactions between auditory and cognitive factors.
The main finding of this thesis was the interaction between noise type and SNR, showing that decreased SNR reduced cognitive spare capacity more in speech-like noise than in speech-shaped noise, even though speech intelligibility levels were similar between noise types. This finding applied to young adults with normal hearing, and there was a similar effect for older adults with hearing impairment when background noise was added compared to no background noise. Task demands (MLLs) interacted with cognitive capacity: individuals with less cognitive capacity were more sensitive to increased cognitive load. However, MLLs did not interact with noise type or with SNR, which shows that the different memory load levels were not affected differently by noise type or SNR. This suggests that different cognitive mechanisms come into play for the storage and processing of speech information in the AIST and for listening to speech in noise. The results thus suggest that a test of cognitive spare capacity is a useful way to assess listening effort, even though the AIST, in the design used in this thesis, might be too cognitively demanding to provide reliable results for all individuals.