1. Biomechanical Assessment of a Human Joint under Natural and Clinically Modified Conditions: The Shoulder
Bernal Covarrubias, Rafael Ricardo (January 2015)
Unbalanced muscle forces in the shoulder joint may lead to functional impairment in the setting of rotator cuff tear and to progressive arthritis in cuff tear arthropathy. A model that predicts muscle forces during common shoulder movements could support treatment decision-making and improve the design of total shoulder prostheses. Unfortunately, the shoulder has many muscles that overlap in function, leading to an indeterminate system. A finite element model employing an optimization algorithm could be used to reduce the number of degrees of freedom and predict loading of the glenohumeral joint. The goal of this study was to develop an anatomically and physiologically correct computational model of the glenohumeral joint. This model was applied to: 1) estimate the force in each muscle during the standard glenohumeral motions (flexion/extension, abduction/adduction and internal/external rotation), and 2) determine stress concentrations within the scapula during these motions. These goals were realized through the following steps. First, a three-dimensional bone reconstruction was performed using computed tomography (CT) scan data, giving a precise anatomical representation of the bony components. Muscle lever arms were then estimated from the reconstructed bones using computer-aided design software; the origins, insertions, and muscle paths were obtained from the literature. The model was applied to estimate the forces within each of the muscles that are necessary to stabilize the joint at a fixed position. Finally, finite element analysis of the scapula was performed to study the stress concentrations, which were identified and related to the morphology of the bone. A force estimation algorithm was developed to determine the necessary muscle force distribution. The algorithm was based on an applied external moment at the joint and the appropriate selection of muscles that could withstand it, ensuring stability while keeping the reaction force at a minimum. This method offered an acceptable solution to the indeterminate problem: a unique solution was found for each shoulder motion. The model was then applied to determine the stress concentration within various regions of the scapula for each of the shoulder motions. The rotator cuff was found to act as the main stabilizer under rotation and to have a significant stabilizing role under flexion and abduction. The finite element model of the shoulder developed here can be used to gain a better understanding of the load transfer mechanisms within the glenohumeral joint and the impact of muscle forces on scapular morphology. This information can then be used to assist with treatment decision-making for rotator cuff tears and with the design of new implants for total shoulder arthroplasty.
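The force-distribution step described above is, at heart, a constrained optimization: balance a prescribed external moment with tension-only muscle forces while keeping the joint reaction small. The sketch below illustrates that general idea with SciPy; the lever-arm matrix, the external moment, the force bounds, and the squared-force objective are illustrative assumptions, not values or choices taken from the thesis.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical moment arms [m] for four shoulder muscles (columns) about the
# three axes of the glenohumeral joint (rows). Not anatomical data.
R = np.array([[0.02, -0.01,  0.03,  0.00],
              [0.01,  0.02, -0.02,  0.03],
              [0.00,  0.03,  0.01, -0.01]])
# Hypothetical external moment [N*m], chosen so a tension-only solution exists.
M_ext = np.array([4.0, 10.5, 8.5])

def objective(f):
    # Proxy for the joint reaction: keep the total (squared) muscle force small.
    return np.sum(f ** 2)

constraints = [{"type": "eq", "fun": lambda f: R @ f - M_ext}]  # moment balance
bounds = [(0.0, 1500.0)] * R.shape[1]                           # muscles only pull

res = minimize(objective, x0=np.full(R.shape[1], 10.0),
               bounds=bounds, constraints=constraints)
print("Estimated muscle forces [N]:", np.round(res.x, 1))
```

With the moment-balance equality and tension-only bounds in place, the optimizer returns one force vector per posture, which is the sense in which the indeterminate problem admits a unique solution.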
2. Near wall high resolution particle image velocimetry and data reconstruction for high speed flows
Raben, Samuel (6 June 2008)
The aim of this work was to understand the physical requirements and to develop the methodology needed to employ Time Resolved Digital Particle Image Velocimetry (TRDPIV) for measuring high speed, high magnification, near wall flow fields. Previous attempts at such measurements have been unsuccessful because of limitations both in equipment and in the methodology for processing the data. This work addresses those issues and successfully demonstrates measurements inside a transonic turbine cascade as well as in a high speed, high magnification wall jet.
Previous studies established that flow tracer delivery is not a trivial task in a high speed, high back pressure environment. Any TRDPIV measurement requires uniform spatial seeding density, but time-resolved measurements require uniform temporal seeding density as well. To this end, a high pressure particle generator was developed, extending seeding capability beyond what was previously attainable. This alone was not sufficient to resolve the seeding issue altogether, however, so an advanced data reconstruction methodology was developed to reconstruct areas of the flow field that were lost due to inhomogeneous seeding. This reconstruction methodology, based on Proper Orthogonal Decomposition (POD), has been shown to produce errors in corrected velocities below those of traditional spatial techniques alone. The combination of the particle generator and the reconstruction methodology was instrumental in successfully acquiring TRDPIV measurements in a high speed, high pressure environment such as a transonic wind tunnel facility.
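The reconstruction idea, filling in vectors lost to poor seeding by projecting the valid measurements onto POD modes built from the ensemble, can be illustrated as follows on synthetic data. This is a generic gappy-POD-style sketch, not the specific methodology developed in this thesis; the synthetic field, the mask, and the number of retained modes are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for TRDPIV snapshots: columns are time samples.
n_points, n_snapshots = 200, 60
x = np.linspace(0.0, 2.0 * np.pi, n_points)
t = np.linspace(0.0, 1.0, n_snapshots)
U = np.sin(np.outer(x, 2 * np.pi * t)) + 0.5 * np.cos(np.outer(2 * x, 2 * np.pi * t))

# POD modes of the mean-subtracted snapshot matrix via SVD.
U_mean = U.mean(axis=1, keepdims=True)
Phi, _, _ = np.linalg.svd(U - U_mean, full_matrices=False)
Phi_r = Phi[:, :5]                       # retain a few energetic modes

# One snapshot with regions lost to inhomogeneous seeding (masked out).
u_true = U[:, 30]
mask = rng.random(n_points) > 0.3        # True where the vector is valid

# Fit mode coefficients on the valid points only, then reconstruct everywhere.
a, *_ = np.linalg.lstsq(Phi_r[mask], (u_true - U_mean[:, 0])[mask], rcond=None)
u_rec = U_mean[:, 0] + Phi_r @ a

print("RMS error on the missing vectors:",
      np.sqrt(np.mean((u_rec[~mask] - u_true[~mask]) ** 2)))
```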
This work also investigates the development of a turbulent wall jet. This experiment helped demonstrate the capability of taking high speed, high magnification TRDPIV measurements and is one of only a few studies of the developing region of these jets. The Reynolds number ranged from 150 to 10,000, corresponding to velocities of 1 to 80 m/s. The results showed good agreement with previously published time-averaged data. Starting from scaling laws for fully developed jets, a new scaling law was found for the developing region that could be applied to all Reynolds numbers in this study. A temporal investigation was also carried out using the temporal coefficients from POD, and a vortex identification scheme was applied at all Reynolds numbers, showing clear trends as the Reynolds number increased. Degree: Master of Science.
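For a sense of what a vortex identification pass over such planar velocity fields involves, the sketch below evaluates the 2D swirling strength (the imaginary part of the velocity-gradient eigenvalues) on a synthetic vortex. The flow field, grid, and threshold are invented for the example, and the abstract does not state which criterion was used, so treat this as one common choice rather than the scheme applied in the experiments.

```python
import numpy as np

# Synthetic 2D velocity field on a uniform grid: a single Lamb-Oseen-like vortex.
n = 128
y, x = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n), indexing="ij")
r2 = x ** 2 + y ** 2 + 1e-12
u = -y * (1.0 - np.exp(-r2 / 0.05)) / r2
v = x * (1.0 - np.exp(-r2 / 0.05)) / r2

# Velocity gradients by finite differences (uniform grid spacing assumed).
dy = dx = 2.0 / (n - 1)
dudy, dudx = np.gradient(u, dy, dx)
dvdy, dvdx = np.gradient(v, dy, dx)

# Swirling strength: eigenvalues of the 2D velocity-gradient tensor are complex
# where the discriminant is negative; their imaginary part marks vortex cores.
discriminant = (dudx + dvdy) ** 2 - 4.0 * (dudx * dvdy - dudy * dvdx)
lambda_ci = 0.5 * np.sqrt(np.maximum(-discriminant, 0.0))

core = lambda_ci > 0.5 * lambda_ci.max()   # arbitrary threshold for illustration
print("Peak swirling strength:", lambda_ci.max())
print("Cells flagged as vortex core:", int(core.sum()))
```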
3. Reconstruction of turbulent velocity fields from punctual measurements (Reconstruction de champs aérodynamiques à partir de mesures ponctuelles)
Arnault, Anthony (13 December 2016)
Real-time monitoring of turbulent flows is a challenging task that concerns a large range of applications. Evaluating wake vortices near the approach runway of an airport, in order to optimize the distance between lined-up aircraft, is one example. Another touches on the broad subject of active flow control: in aerodynamics, control of detached flows is an essential issue, and such control can serve to reduce the noise produced by airplanes or improve their aerodynamic performance. This work aims at developing tools to produce real-time predictions of turbulent velocity fields from a small number of point sensors. After a literature review focused on a popular reconstruction method in fluid mechanics, Stochastic Estimation (SE), the first step was to evaluate its overall prediction performance on several turbulent flows of increasing complexity. Because the accuracy of SE proved very limited in some cases, a deeper characterization of the method was performed, which highlighted the filtering effect of SE on the spatial and temporal content of the velocity fields. This characterization also pointed out the strong influence of the sensor locations on the estimation quality. A sensor location optimization algorithm was therefore proposed and extended to the choice of time delays when using Multi-Time-Delay SE. While optimized sensor locations brought some accuracy improvements, these remained insufficient for some test cases. The opportunity to use a data assimilation method, the Kalman filter, which combines a dynamic model of the flow with sensor information, was therefore investigated. For some cases the results were promising, and the Kalman filter outperformed all SE methods.
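In its linear form, stochastic estimation amounts to regressing the full field onto the sensor signals over a training ensemble and then applying the learned coefficients to new sensor readings. The sketch below shows that mechanism on synthetic low-rank data; the field, sensor positions, ensemble size, and noise level are invented for the illustration and do not correspond to the flows studied in the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic training ensemble: full "velocity" snapshots plus a few point sensors.
n_points, n_train = 400, 2000
modes = rng.standard_normal((n_points, 3))                 # hidden spatial structures
U = modes @ rng.standard_normal((3, n_train)) \
    + 0.05 * rng.standard_normal((n_points, n_train))
sensor_idx = [10, 120, 250, 390]                           # arbitrary probe locations
S = U[sensor_idx, :]                                       # sensor time histories

# Linear stochastic estimation: u(x, t) ~ A(x) s(t), with A fitted by least squares
# on the ensemble (equivalent to working with the two-point correlations).
A, *_ = np.linalg.lstsq(S.T, U.T, rcond=None)              # shape (n_sensors, n_points)

# Estimate a new, unseen field from its sensor readings alone.
u_new = modes @ rng.standard_normal(3)
u_est = A.T @ u_new[sensor_idx]
print("Correlation between estimate and truth:", np.corrcoef(u_est, u_new)[0, 1])
```

A Kalman filter goes one step further by blending such sensor-driven estimates with a dynamic model of the flow at every time step.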
4. Efficient Knot Optimization for Accurate B-spline-based Data Approximation
Yo-Sing Yeh (14 December 2020)
Many practical applications benefit from the reconstruction of a smooth multivariate function from discrete data, for purposes such as reducing file size or improving analytic and visualization performance. Among the different reconstruction methods, the tensor product B-spline has a number of advantageous properties over alternative data representations. However, constructing a best-fit B-spline approximation presents many roadblocks. Among the many free parameters in the B-spline model, the choice of the knot vectors, which define the separation between the piecewise polynomial patches of a B-spline construction, has a major influence on the resulting reconstruction quality. Yet existing knot placement methods are ineffective, computationally expensive, or impose limitations on the dataset format or the B-spline order. Moving beyond the 1D case (curves) to higher dimensional datasets (surfaces, volumes, hypervolumes) introduces additional computational challenges as well. Further complications arise in the case of undersampled data points, where the approximation problem can become ill-posed and existing regularization proves unsatisfactory.

This dissertation is concerned with improving the efficiency and accuracy of constructing a B-spline approximation of discrete data. Specifically, we present a novel B-spline knot placement approach for accurate reconstruction of discretely sampled data, first in 1D and then extended to higher dimensions for both structured and unstructured formats. Our knot placement methods take into account the features and complexity of the input data by estimating its high-order derivatives, so that the resulting approximation is highly accurate with a low number of control points. We demonstrate our method on various 1D to 3D structured and unstructured datasets, including synthetic, simulation, and captured data. We compare our method with state-of-the-art knot placement methods and show that our approach achieves higher accuracy while requiring fewer B-spline control points. We discuss a regression approach to selecting the number of knots for multivariate data given a target error threshold. For the reconstruction of irregularly sampled data, where the linear system often becomes ill-posed, we propose a locally varying regularization scheme to address cases in which a straightforward regularization fails to produce a satisfactory reconstruction.
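To make the derivative-driven idea concrete, here is a small 1D sketch that places interior knots where an estimated second derivative accumulates fastest and then fits a least-squares cubic B-spline with SciPy. It is only a rough analogue of the approach described above: the test function, derivative order, knot count, and density heuristic are assumptions for the example, not the dissertation's algorithm.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

# Hypothetical 1D dataset with one sharp feature and a gentle oscillation.
x = np.linspace(0.0, 1.0, 500)
y = np.tanh(40.0 * (x - 0.5)) + 0.1 * np.sin(8.0 * np.pi * x)

# Feature measure from a high-order derivative estimate (here the 2nd derivative);
# place interior knots where that measure accumulates fastest.
d2 = np.abs(np.gradient(np.gradient(y, x), x))
density = np.cumsum(d2 + 1e-3)                   # small floor keeps some knots everywhere
density = (density - density[0]) / (density[-1] - density[0])
n_interior = 12
interior_knots = np.interp(np.linspace(0.0, 1.0, n_interior + 2)[1:-1], density, x)

spline = LSQUnivariateSpline(x, y, interior_knots, k=3)   # least-squares cubic fit
print("RMS fit error:", np.sqrt(np.mean((spline(x) - y) ** 2)))
print("Interior knots:", np.round(interior_knots, 3))
```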
5. Modelling user interaction at scale with deep generative methods
Ionascu, Beatrice (January 2018)
Understanding how users interact with a company's service is essential for data-driven businesses that want to better cater to their users and improve their offering. By using a generative machine learning approach it is possible to model user behaviour and generate new data to simulate, or to recognize and explain, typical usage patterns. In this work we introduce an approach for modelling users' interaction behaviour at scale in a client-service model. We propose a novel representation of multivariate time-series data as time pictures that express temporal correlations through spatial organization. This representation exhibits two key properties that convolutional networks were built to exploit, which allows us to develop an approach based on deep generative models with convolutional networks as a backbone. By introducing this approach to feature learning for time-series data, we extend the application of convolutional neural networks to the multivariate time-series domain, and specifically to user interaction data. We adopt a variational approach inspired by the β-VAE framework in order to learn hidden factors that define different user behaviour patterns. We explore different values of the regularization parameter β and show that it is possible to construct a model that learns a latent representation of identifiable and distinct user behaviours. We show on real-world data that the model generates realistic samples that capture the true population-level statistics of the interaction behaviour data, learns different user behaviours, and provides accurate imputations of missing data.
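The modelling core, a convolutional encoder and decoder trained with a β-weighted KL term, can be sketched in PyTorch as below. The input size, channel counts, latent dimension, and β value are placeholders chosen for a compact, runnable example; they are not the configuration used in this work.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvBetaVAE(nn.Module):
    """Minimal convolutional beta-VAE for 1x32x32 'time pictures' (illustrative sizes)."""
    def __init__(self, latent_dim=8):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.ReLU(),    # 32 -> 16
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 8
            nn.Flatten())
        self.fc_mu = nn.Linear(32 * 8 * 8, latent_dim)
        self.fc_logvar = nn.Linear(32 * 8 * 8, latent_dim)
        self.fc_dec = nn.Linear(latent_dim, 32 * 8 * 8)
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),  # 8 -> 16
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1))              # 16 -> 32

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)   # reparameterization
        x_hat = self.dec(self.fc_dec(z).view(-1, 32, 8, 8))
        return x_hat, mu, logvar

def beta_vae_loss(x, x_hat, mu, logvar, beta=4.0):
    recon = F.mse_loss(x_hat, x, reduction="sum") / x.size(0)
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp()) / x.size(0)
    return recon + beta * kl

# Smoke test on random data standing in for user-interaction time pictures.
model = ConvBetaVAE()
x = torch.randn(16, 1, 32, 32)
loss = beta_vae_loss(x, *model(x))
loss.backward()
print("loss:", float(loss))
```

Setting β above 1 trades reconstruction fidelity for additional pressure on the KL term, which is what encourages the latent factors to separate into interpretable behaviour patterns.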
6. A study of algorithms for interpolation of numerical sequences (Um estudo sobre algoritmos de interpolação de sequencias numericas)
Delgado, Eric Magalhães (2009)
Advisor: Max Henrique Machado Costa. Master's dissertation in Electrical Engineering (Telecommunications and Telematics), Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação.
This dissertation presents a study of interpolation and decimation algorithms for numerical sequences whose filters are derived from the ideal reconstruction filter. An adaptive cubic interpolation algorithm is proposed and its gains are analyzed by comparison with classic algorithms. The idea is to explore the trade-off between quality and complexity of the interpolation filters. The adaptation of the filter, obtained from spatial and spectral estimates of the sequence to be interpolated, is useful because it allows efficient use of complex filters in critical regions such as the edge regions of an image. Simulations on typical images show a significant quantitative gain of the adaptive algorithm when compared to classical algorithms. Furthermore, an interpolation algorithm is analyzed for the case where information about the acquisition process of the sequence to be interpolated is available.
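For a sense of how a cubic interpolation filter derived from the ideal (sinc) reconstruction filter looks in code, the sketch below implements the Keys cubic convolution kernel together with a toy adaptive rule that sharpens the kernel near edges. The adaptive rule, test sequence, and thresholds are illustrative assumptions and not the spatial/spectral estimator developed in the dissertation.

```python
import numpy as np

def keys_kernel(s, a=-0.5):
    """Keys cubic convolution kernel, a piecewise-cubic approximation of the ideal sinc."""
    s = np.abs(s)
    out = np.zeros_like(s)
    near, far = s <= 1.0, (s > 1.0) & (s < 2.0)
    out[near] = (a + 2) * s[near] ** 3 - (a + 3) * s[near] ** 2 + 1
    out[far] = a * s[far] ** 3 - 5 * a * s[far] ** 2 + 8 * a * s[far] - 4 * a
    return out

def cubic_interpolate(y, t, a=-0.5):
    """Interpolate sequence y at fractional positions t by 4-tap cubic convolution."""
    y = np.asarray(y, dtype=float)
    k = np.floor(t).astype(int)
    result = np.zeros_like(t, dtype=float)
    for offset in (-1, 0, 1, 2):
        idx = np.clip(k + offset, 0, len(y) - 1)      # replicate samples at the edges
        result += y[idx] * keys_kernel(t - (k + offset), a)
    return result

# Toy adaptive rule: use a sharper kernel (more negative a) where the local
# gradient suggests an edge, and the standard Catmull-Rom value elsewhere.
y = np.concatenate([np.zeros(20), np.ones(20)]) + 0.05 * np.sin(np.arange(40))
t = np.linspace(1.0, 38.0, 200)
local_grad = np.abs(np.gradient(y))[np.clip(np.round(t).astype(int), 0, len(y) - 1)]
a_adaptive = np.where(local_grad > 0.2, -0.75, -0.5)

y_interp = np.array([cubic_interpolate(y, np.array([ti]), ai)[0]
                     for ti, ai in zip(t, a_adaptive)])
print("First interpolated samples:", np.round(y_interp[:5], 3))
```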