11. Development of statistical shape and intensity models of eroded scapulae to improve shoulder arthroplasty
Sharif Ahmadian, Azita (22 December 2021)
Reverse total shoulder arthroplasty (RTSA) is an effective surgical alternative to conventional total shoulder arthroplasty for patients with severe rotator cuff tears and glenoid erosion. To help optimize RTSA design, it is necessary to understand the geometry of glenoid erosions and their unique morphology across the entire bone. One of the most powerful tools for systematically quantifying and visualizing the variation of bone geometry throughout a population is Statistical Shape Modeling (SSM); this method assesses variation in the full shape of a bone, rather than in discrete anatomical features, which is very useful for identifying abnormalities, planning surgeries, and improving implant designs. Many scapula SSMs have recently been presented in the literature; however, each has been created from normal, healthy bones. Creating a scapula SSM derived exclusively from patients with complex glenoid bone erosions is therefore both critical and challenging.
In addition, several studies have quantified scapular bone properties in patients with complex glenoid erosion; however, because of their discrete nature, these analyses cannot serve as the basis for Finite Element Modeling (FEM). A need therefore exists to systematically quantify the variation of bone properties in a glenoid erosion patient population using a method that captures variation across the entire bone. This can be achieved with Statistical Intensity Modeling (SIM), which can then generate scapula FEMs with realistic bone properties for the evaluation of orthopaedic implants. An SIM enables researchers to generate models whose bone properties represent a specific, known portion of the population variation, making the findings more generalizable. Accordingly, the main purposes of this research are to develop an SSM that mathematically quantifies, in a systematic manner, the variation in geometry of scapulae with severe glenoid erosion, and an SIM that determines the main modes of variation in bone property distribution for use in future FEM studies.
To draw meaningful statistical conclusions from the dataset, we need to compare and relate corresponding parts of the scapula. To achieve this correspondence, 3D triangulated mesh models of 61 scapulae were created from pre-operative CT scans of patients treated with RTSA, and a Non-Rigid (NR) registration method was then used to morph one atlas point cloud to the shapes of all other bones. However, the more complex the shape, the more difficult it is to maintain good correspondence. To overcome this challenge, we adapted and optimized an NR Iterative Closest Point (NR-ICP) method and applied it to the 61 eroded scapulae, giving each bone shape an identical mesh structure (i.e., the same number and anatomical location of points). To assess the quality of the proposed algorithm, the resulting correspondence error was evaluated by comparing the positions of ground-truth points with the corresponding point locations produced by the algorithm. The average correspondence error across all anatomical landmarks and the two observers was 2.74 mm, with inter- and intra-observer reliabilities of ±0.31 and ±0.06 mm. Moreover, the Root-Mean-Square (RMS) and Hausdorff errors of geometric registration between the original and deformed models were 0.25±0.04 mm and 0.76±0.14 mm, respectively.
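For illustration, here is a minimal sketch (not the thesis's implementation) of how such error metrics could be computed with NumPy/SciPy, assuming `original` and `deformed` are (N, 3) arrays of corresponding surface points and `gt_landmarks`/`algo_points` are matched landmark sets; all names are hypothetical:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def rms_error(original, deformed):
    """RMS of point-to-point distances between corresponding (N, 3) point sets."""
    return np.sqrt(np.mean(np.linalg.norm(original - deformed, axis=1) ** 2))

def hausdorff_error(original, deformed):
    """Symmetric Hausdorff distance: the worst-case surface mismatch."""
    return max(directed_hausdorff(original, deformed)[0],
               directed_hausdorff(deformed, original)[0])

def correspondence_error(gt_landmarks, algo_points):
    """Mean distance between ground-truth landmarks and algorithm-produced points."""
    return np.mean(np.linalg.norm(gt_landmarks - algo_points, axis=1))
```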
After registration, Principal Component Analysis (PCA) was applied to the deformed models as a group to describe independent modes of variation in the dataset. The robustness of the SSM was evaluated using three standard metrics: compactness, generality, and specificity. Regarding compactness, the first 9 principal modes of variation accounted for 95% of the variability, while the model's generality error and the specificity calculated over 10,000 instances were 2.6 mm and 2.99 mm, respectively.
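A sketch of how such an SSM and its compactness could be computed, assuming `shapes` stacks the registered, corresponded meshes row-wise; this is a generic PCA construction, not the thesis's exact pipeline:

```python
import numpy as np

def build_ssm(shapes):
    """PCA of corresponded meshes; `shapes` is (n_bones, n_points * 3)."""
    mean_shape = shapes.mean(axis=0)
    # SVD of the centered data matrix yields the principal modes of variation.
    _, singular_values, modes = np.linalg.svd(shapes - mean_shape, full_matrices=False)
    variances = singular_values ** 2 / (shapes.shape[0] - 1)
    return mean_shape, modes, variances

def compactness(variances, threshold=0.95):
    """Number of leading modes needed to explain `threshold` of total variance."""
    cumulative = np.cumsum(variances) / variances.sum()
    return int(np.searchsorted(cumulative, threshold) + 1)
```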
The SIM results showed that the first mode of variation accounted for overall changes in intensity across the entire bone, while the second mode represented localized changes in glenoid vault bone quality. The third mode showed intensity changes at the posterior and inferior glenoid rim associated with posteroinferior rim erosion, suggesting that fixation should avoid this region and that screws should preferentially be placed in the anterosuperior region of the glenoid to improve implant fixation.
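A sketch of how an SIM built the same way (PCA on per-point CT intensities at corresponding locations) could generate a new bone-property instance for FEM; the function and variable names are hypothetical:

```python
import numpy as np

def generate_instance(mean_intensity, modes, variances, weights):
    """New per-point intensity vector at `weights` standard deviations along
    the leading modes, e.g. weights=[+2, 0, 0] for mode 1 at +2 SD."""
    k = len(weights)
    return mean_intensity + (np.asarray(weights) * np.sqrt(variances[:k])) @ modes[:k]
```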

12. Real-time visual tracking using image processing and filtering methods
Ha, Jin-cheol (1 April 2008)
The main goal of this thesis is to develop real-time computer vision algorithms that detect and track targets in uncertain, complex environments using only a visual sensor. Two major subjects addressed by this work are:

1. The development of fast and robust image segmentation algorithms that automatically search for and detect targets in a given image.
2. The development of sound filtering algorithms to reduce the effects of noise in the signals produced by the image processing.

The main constraint of this research is that the algorithms must work in real time with the limited computing power of an onboard computer in an aircraft. In particular, we focus on contour tracking, which tracks the outline of the target represented by contours in the image plane. This thesis is concerned with three specific categories: image segmentation, shape modeling, and signal filtering.
We have designed image segmentation algorithms based on geometric active contours implemented via level set methods. Geometric active contours are deformable contours that automatically track the outlines of objects in images. In this approach, the contour in the image plane is represented as the zero-level set of a higher-dimensional function (for example, a three-dimensional surface for a two-dimensional contour). This representation handles topological changes of the contour (e.g., merging, splitting) naturally. Although geometric active contours prevail in many fields of computer vision, they suffer from the high computational costs associated with level set methods, so simplified versions such as fast marching methods are often used in real-time visual tracking. This thesis presents the development of a fast and robust segmentation algorithm based on up-to-date extensions of level set methods and geometric active contours, namely a fast implementation of the Chan-Vese active contour model (FICVM).
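For illustration, a minimal sketch of Chan-Vese segmentation using scikit-image's standard implementation (not the FICVM itself); `mu` controls contour smoothness, and other parameter names may vary across library versions:

```python
import numpy as np
from skimage import data, img_as_float
from skimage.segmentation import chan_vese

frame = img_as_float(data.camera())   # stand-in for a grayscale video frame
# The contour evolves as the zero-level set of an implicit function;
# mu penalizes contour length (larger mu -> smoother outline).
mask = chan_vese(frame, mu=0.25)      # boolean foreground/background mask
print(mask.shape, mask.mean())
```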
The shape prior is a useful cue in recognizing the true target, since the outline tracked by the contour tracker can easily be disrupted by noise. In geometric active contours, to cope with deviations from the true outline of the target, a higher-dimensional function is constructed from the shape prior, and the contour tracks the outline of an object by considering the difference between the higher-dimensional functions obtained from the shape prior and from a measurement in a given image. The higher-dimensional function is often a distance map, which is computationally expensive to construct. This thesis focuses on extracting shape information from only the zero-level set of the higher-dimensional function, a strategy that compensates for the inaccuracies in the calculated shape difference that arise when a simplified higher-dimensional function is used. We call this contour-based shape modeling.
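A sketch of the distance-map comparison described above, using SciPy's Euclidean distance transform; the thesis's contour-based variant restricts this comparison to the zero-level set, which is not reproduced here:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(mask):
    """Signed distance map of a boolean mask: negative inside, positive outside."""
    return distance_transform_edt(~mask) - distance_transform_edt(mask)

def shape_dissimilarity(mask_a, mask_b):
    """L2 difference between the two higher-dimensional (distance) functions."""
    return np.sqrt(np.mean((signed_distance(mask_a) - signed_distance(mask_b)) ** 2))
```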
Filtering is an essential element of tracking problems because of the noise present in system models and measurements. The well-known Kalman filter provides an exact solution only for problems with linear models and Gaussian distributions (linear/Gaussian problems). For nonlinear/non-Gaussian problems, particle filters have received much attention in recent years. Particle filtering is useful for approximating complicated posterior probability distributions; however, its computational burden prevents it from performing at full capacity in real-time applications. This thesis concentrates on improving the processing time of particle filtering for real-time applications.
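For illustration, a minimal bootstrap particle filter for a toy 1-D nonlinear model, showing the predict/weight/resample cycle whose cost is the bottleneck; the dynamics and noise levels are invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def pf_step(particles, measurement, process_std=0.5, meas_std=1.0):
    # Predict: propagate each particle through the (nonlinear) dynamics.
    particles = np.sin(particles) + rng.normal(0.0, process_std, particles.size)
    # Weight: Gaussian likelihood of the measurement under each particle.
    weights = np.exp(-0.5 * ((measurement - particles) / meas_std) ** 2)
    weights /= weights.sum()
    # Resample: multinomial resampling to counter weight degeneracy.
    return particles[rng.choice(particles.size, particles.size, p=weights)]

particles = rng.normal(0.0, 1.0, 500)        # initial belief over the state
for z in [0.8, 0.3, -0.1]:                   # toy measurement sequence
    particles = pf_step(particles, z)
print("state estimate:", particles.mean())
```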
In principle, we follow the particle filter within the geometric active contour framework. This thesis proposes an advanced blob tracking scheme in which a blob contains shape prior information about the target. This scheme simplifies the sampling process and quickly suggests the samples with a high probability of being the target; only for these samples is the contour tracking algorithm applied to obtain a more detailed state estimate. Curve evolution in the contour tracking is realized by the FICVM, the dissimilarity measure is calculated by the contour-based shape modeling method, and the shape prior is updated when it satisfies certain conditions. The new particle filter is applied to problems of low contrast and severe daylight conditions, cluttered environments, and tracking of appearing/disappearing targets. We also demonstrate the utility of the filtering algorithm for multiple-target tracking in the presence of occlusions. This thesis presents several test results from simulations and flight tests, in which the proposed algorithms demonstrated promising results in a variety of tracking situations.

13. The use of a body-wide automatic anatomy recognition system in image analysis of kidneys
Mohammadianrasanani, Seyedmehrdad (January 2013)
No description available.

15. A tale of two applications: closed-loop quality control for 3D printing, and multiple imputation and the bootstrap for the analysis of big data with missingness
Wenbin Zhu (20 April 2022)
1. A Closed-Loop Machine Learning and Compensation Framework for Geometric Accuracy Control of 3D Printed Products

Additive manufacturing (AM) systems enable direct printing of three-dimensional (3D) physical products from computer-aided design (CAD) models. Despite the many advantages that AM systems have over traditional manufacturing, one of their significant limitations that impedes their wide adoption is geometric inaccuracies, or shape deviations between the printed product and the nominal CAD model. Machine learning for shape deviations can enable geometric accuracy control of 3D printed products via the generation of compensation plans, which are modifications of CAD models informed by the machine learning algorithm that reduce deviations in expectation. However, existing machine learning and compensation frameworks cannot accommodate deviations of fully 3D shapes with different geometries. The feasibility of existing frameworks for geometric accuracy control is further limited by resource constraints in AM systems that prevent the printing of multiple copies of new shapes.

We present a closed-loop machine learning and compensation framework that can improve geometric accuracy control of 3D shapes in AM systems. Our framework is based on a Bayesian extreme learning machine (BELM) architecture that leverages data and deviation models from previously printed products to transfer deviation models, and more accurately capture deviation patterns, for new 3D products. The closed-loop nature of compensation under our framework, in which past compensated products that do not adequately meet dimensional specifications are fed into the BELMs to re-learn the deviation model, enables the identification of effective compensation plans and satisfies resource constraints by printing only one new shape at a time. The power and cost-effectiveness of our framework are demonstrated with two validation experiments that involve different geometries for a Markforged Metal X AM machine printing 17-4 PH stainless steel products. As demonstrated in our case studies, our framework can reduce shape inaccuracies by 30% to 60% (depending on a shape's geometric complexity) in at most two iterations, with three training shapes and one or two test shapes for a specific geometry involved across the iterations. We also perform an additional validation experiment using a third geometry to establish the capabilities of our framework for prospective shape deviation prediction of 3D shapes that have never been printed before. This third experiment indicates that choosing one suitable class of past products for prospective prediction and model transfer, instead of including all past printed products with different geometries, could be sufficient for obtaining deviation models with good predictive performance. Ultimately, our closed-loop machine learning and compensation framework provides an important step towards accurate and cost-efficient deviation modeling and compensation for fully 3D printed products using a minimal number of printed training and test shapes, and thereby can advance AM as a high-quality manufacturing paradigm.
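For illustration, a sketch of the two core ideas under simplifying assumptions: a plain extreme learning machine (random hidden layer with a ridge-regression readout) fit to observed deviations, and compensation by subtracting the predicted deviation from the nominal coordinates. This is not the Bayesian, transfer-learning BELM of the thesis, and the deviation pattern below is invented:

```python
import numpy as np

rng = np.random.default_rng(1)

class ELMRegressor:
    """Plain extreme learning machine: random features + ridge readout."""
    def __init__(self, n_hidden=100, ridge=1e-3):
        self.n_hidden, self.ridge = n_hidden, ridge

    def fit(self, X, y):
        self.W = rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)              # random hidden features
        # Ridge-regularized least-squares solve for the output weights.
        self.beta = np.linalg.solve(H.T @ H + self.ridge * np.eye(self.n_hidden),
                                    H.T @ y)
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

# Hypothetical usage: nominal boundary points and their measured deviations.
nominal = rng.uniform(0, 50, size=(200, 3))           # CAD coordinates (mm)
deviation = 0.02 * nominal[:, :1] + rng.normal(0, 0.01, (200, 1))  # toy pattern
model = ELMRegressor().fit(nominal, deviation)
compensated = nominal.copy()
compensated[:, :1] -= model.predict(nominal)          # print this geometry instead,
                                                      # so deviation cancels in expectation
```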
2. Multiple Imputation and the Bootstrap for the Analysis of Big Data with Missingness

Inference can be a challenging task for Big Data. Two significant issues are that Big Data frequently exhibit complicated missing data patterns, and that the complex statistical models and machine learning algorithms typically used to analyze Big Data do not have convenient quantification of uncertainties for estimators. These two difficulties have previously been addressed using multiple imputation and the bootstrap, respectively. However, it is not clear how multiple imputation and bootstrap procedures can be effectively combined to perform statistical inferences on Big Data with missing values. We investigate a practical framework for the combination of multiple imputation and bootstrap methods. Our framework is based on two principles: distribution of multiple imputation and bootstrap calculations across parallel computational cores, and the quantification of sources of variability involved in bootstrap procedures that use subsampling techniques via random effects or hierarchical models. This framework effectively extends the scope of existing methods for multiple imputation and the bootstrap to a broad range of Big Data settings. We perform simulation studies for linear and logistic regression across Big Data settings with different rates of missingness to characterize the frequentist properties and computational efficiencies of the combinations of multiple imputation and the bootstrap. We further illustrate how effective combinations of multiple imputation and the bootstrap for Big Data analyses can be identified in practice by means of both the simulation studies and a case study on COVID infection status data. Ultimately, our investigation demonstrates how the flexible combination of multiple imputation and the bootstrap under our framework can enable valid statistical inferences in an effective manner for Big Data with missingness.
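For illustration, a sketch of one bootstrap-then-impute combination for estimating a mean with a percentile interval, using a deliberately simple normal-model imputer for a single incomplete variable; the parallelization and variance modeling described above are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(2)

def impute_once(x):
    """Draw missing entries from a normal fit to the observed entries."""
    x = x.copy()
    obs = ~np.isnan(x)
    x[~obs] = rng.normal(x[obs].mean(), x[obs].std(), (~obs).sum())
    return x

def boot_mi_mean(x, n_boot=200, n_imputations=5):
    """Bootstrap-then-impute estimate of the mean with a percentile interval."""
    estimates = []
    for _ in range(n_boot):
        xb = x[rng.integers(0, len(x), len(x))]        # bootstrap resample
        # Average the M completed-data estimates (Rubin's rule for the point estimate).
        estimates.append(np.mean([impute_once(xb).mean()
                                  for _ in range(n_imputations)]))
    estimates = np.asarray(estimates)
    return estimates.mean(), np.percentile(estimates, [2.5, 97.5])

x = rng.normal(10, 2, 1000)
x[rng.random(1000) < 0.2] = np.nan                     # 20% missing at random
print(boot_mi_mean(x))
```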