371 |
Micromechanical Behavior of Fiber-Reinforced Composites using Finite Element Simulation and Deep Learning. Sepasdar, Reza. 07 October 2021.
This dissertation studies the micromechanical behavior of high-performance carbon fiber-reinforced polymer (CFRP) composites through high-fidelity numerical simulations. We investigated multiple transverse cracking of cross-ply CFRP laminates at the microstructure level by simulating large numerical models. Such an investigation demands an efficient numerical framework along with significant computational power. Hence, an efficient numerical framework was developed for simulating 2-D representations of CFRP composites' microstructure. The framework utilizes a nonlinear interface-enriched generalized finite element method (IGFEM) scheme, which significantly decreases the computational cost. The framework was also designed to be fast and memory-efficient to enable simulating large numerical models. By utilizing the developed framework, the impacts of a few parameters on the evolution of transverse crack density in cross-ply CFRP laminates were studied. The considered parameters were the characteristics of fiber/matrix cohesive interfaces, the matrix stiffness, and the longitudinal stiffness of the $0^{\circ}$ plies. We also developed a micromechanical framework for efficient and accurate simulation of damage propagation and failure in aligned discontinuous carbon fiber-reinforced composites under loading along the fibers' direction. The framework was validated against the experimental results of a recently developed 3-D printed aligned discontinuous carbon fiber-reinforced composite as the composite of interest. The framework was then utilized to investigate the impacts of a few parameters of the constitutive equations on the strength and failure pattern of the composites of interest. This dissertation also contributes towards improving the computational efficiency of CFRP composite simulations. We exhaustively investigated the cause of a convergence difficulty in finite element analyses caused by cohesive zone models (CZMs), which are commonly used to simulate fiber/matrix interfaces in CFRP composites. The CZMs' convergence difficulty significantly increases the computational burden. For the first time, we explained the root of the convergence difficulty and proposed a simple technique to overcome the convergence issue. The proposed technique outperformed the existing methods in terms of accuracy and computational cost. We also proposed a deep learning framework for predicting full-field distributions of mechanical responses in 2-D representations of CFRP composites based on the geometry of the microstructures. The deep learning framework can be used as a surrogate for the expensive and time-consuming finite element simulations. The proposed framework was able to accurately predict the stress distribution at an early stage of damage initiation and the failure pattern in representations of CFRP composite microstructures under transverse tension. / Doctor of Philosophy / Carbon fiber-reinforced polymers (CFRPs) are lightweight materials with excellent mechanical performance. Hence, these materials have a wide range of applications in various industries such as aerospace, automotive, and civil engineering. The extensive use of CFRPs has made them an active area of research, and there have been great efforts to better understand and improve the mechanical properties of these materials over the past few decades. Therefore, CFRP materials and their manufacturing processes are constantly changing, and new types of CFRPs keep being developed.
As a result, the mechanical behavior of CFRPs needs to be exhaustively investigated to provide guidelines for their optimal engineering design and to indicate the future direction of manufacturing improvements. This dissertation studied the mechanical behavior of CFRPs through high-fidelity simulations. Two types of CFRP were investigated: laminates and 3-D printed CFRPs. Laminates are the most popular type of CFRP and are commonly used to construct the bodies of aircraft. 3-D printed CFRPs are a newer type of material that is gaining traction due to its ability to construct structures with complex geometries at high speed and without direct human supervision. The numerical simulations of CFRPs under mechanical loading are time-consuming and require significant computational power even when run on a supercomputer. Hence, this dissertation also contributes to improving the computational efficiency of numerical simulations. To decrease the computational cost, we proposed a technique that can significantly speed up the numerical simulations of CFRPs. Moreover, we utilized artificial intelligence to develop a new framework that can be substituted for the expensive and time-consuming conventional numerical simulations to quickly predict specific mechanical responses of CFRPs.
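The surrogate idea in this abstract lends itself to a brief illustration. The sketch below is only a minimal stand-in, not the dissertation's actual architecture: an encoder-decoder convolutional network maps a 2-D microstructure mask to a full-field stress map and is trained against finite element results. The layer sizes, image resolution, and loss choice are assumptions made for the example.

```python
# Minimal sketch (not the dissertation's actual architecture): an encoder-decoder
# CNN that maps a 2-D microstructure image (fiber/matrix mask) to a full-field
# stress map, trained against finite element results on synthetic stand-in data.
import torch
import torch.nn as nn

class StressSurrogate(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: downsample the microstructure geometry.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: upsample back to a full-field stress prediction.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, microstructure):
        return self.decoder(self.encoder(microstructure))

model = StressSurrogate()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# One illustrative training step: geometry (batch, 1, 128, 128) -> stress field
# of the same shape, with random placeholders standing in for FE pairs.
geometry = torch.rand(8, 1, 128, 128)
fe_stress = torch.rand(8, 1, 128, 128)
loss = loss_fn(model(geometry), fe_stress)
loss.backward()
optimizer.step()
```

Once trained on enough geometry/stress pairs, a single forward pass through such a network replaces a full finite element solve, which is the source of the speed-up described above.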
|
372 |
Machine Learning and Field Inversion approaches to Data-Driven Turbulence Modeling. Michelen Strofer, Carlos Alejandro. 27 April 2021.
There still is a practical need for improved closure models for the Reynolds-averaged Navier-Stokes (RANS) equations. This dissertation explores two different approaches for using experimental data to provide improved closure for the Reynolds stress tensor field. The first approach uses machine learning to learn a general closure model from data. A novel framework is developed to train deep neural networks using experimental velocity and pressure measurements. The sensitivity of the RANS equations to the Reynolds stress, required for gradient-based training, is obtained by means of both variational and ensemble methods. The second approach is to infer the Reynolds stress field for a flow of interest from limited velocity or pressure measurements of the same flow. Here, this field inversion is done using a Monte Carlo Bayesian procedure and the focus is on improving the inference by enforcing known physical constraints on the inferred Reynolds stress field. To this end, a method for enforcing boundary conditions on the inferred field is presented. The two data-driven approaches explored and improved upon here demonstrate the potential for improved practical RANS predictions. / Doctor of Philosophy / The Reynolds-averaged Navier-Stokes (RANS) equations are widely used to simulate fluid flows in engineering applications despite their known inaccuracy in many flows of practical interest. The uncertainty in the RANS equations is known to stem from the Reynolds stress tensor for which no universally applicable turbulence model exists. The computational cost of more accurate methods for fluid flow simulation, however, means RANS simulations will likely continue to be a major tool in engineering applications and there is still a need for improved RANS turbulence modeling. This dissertation explores two different approaches to use available experimental data to improve RANS predictions by improving the uncertain Reynolds stress tensor field. The first approach is using machine learning to learn a data-driven turbulence model from a set of training data. This model can then be applied to predict new flows in place of traditional turbulence models. To this end, this dissertation presents a novel framework for training deep neural networks using experimental measurements of velocity and pressure. When using velocity and pressure data, gradient-based training of the neural network requires the sensitivity of the RANS equations to the learned Reynolds stress. Two different methods, the continuous adjoint and ensemble approximation, are used to obtain the required sensitivity. The second approach explored in this dissertation is field inversion, whereby available data for a flow of interest is used to infer a Reynolds stress field that leads to improved RANS solutions for that same flow. Here, the field inversion is done via the ensemble Kalman inversion (EKI), a Monte Carlo Bayesian procedure, and the focus is on improving the inference by enforcing known physical constraints on the inferred Reynolds stress field. To this end, a method for enforcing boundary conditions on the inferred field is presented. While further development is needed, the two data-driven approaches explored and improved upon here demonstrate the potential for improved practical RANS predictions.
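The field-inversion approach above centers on ensemble Kalman inversion. The following minimal sketch shows one EKI update cycle with a stand-in linear forward model in place of a RANS solver; the field size, ensemble size, observation operator, and noise level are illustrative assumptions, not values from the dissertation.

```python
# Minimal sketch of ensemble Kalman inversion (EKI), the Monte Carlo Bayesian
# procedure referenced above, using a stand-in linear "forward model" in place
# of a RANS solver.
import numpy as np

rng = np.random.default_rng(0)

n_field, n_obs, n_ens = 50, 10, 100            # field dofs, observations, ensemble members
H = rng.standard_normal((n_obs, n_field))      # stand-in for the velocity response to the field
true_field = np.sin(np.linspace(0, np.pi, n_field))
obs_noise_std = 0.05
y = H @ true_field + obs_noise_std * rng.standard_normal(n_obs)
R = (obs_noise_std ** 2) * np.eye(n_obs)       # observation noise covariance

ensemble = rng.standard_normal((n_ens, n_field))   # prior samples of the field

for _ in range(10):                                 # a few EKI iterations
    g = ensemble @ H.T                              # forward-model outputs per member
    dtheta = ensemble - ensemble.mean(0)
    dg = g - g.mean(0)
    C_tg = dtheta.T @ dg / (n_ens - 1)              # parameter-output covariance
    C_gg = dg.T @ dg / (n_ens - 1)                  # output covariance
    K = C_tg @ np.linalg.inv(C_gg + R)              # Kalman gain
    y_pert = y + obs_noise_std * rng.standard_normal((n_ens, n_obs))
    ensemble = ensemble + (y_pert - g) @ K.T        # update each member

print("error:", np.linalg.norm(ensemble.mean(0) - true_field))
```

Replacing the linear map with a RANS solve of each ensemble member, and adding the physical constraints discussed above, turns this toy loop into the kind of inference procedure described in the abstract.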
|
373 |
Summarizing Legal Depositions. Chakravarty, Saurabh. 18 January 2021.
Documents like legal depositions are used by lawyers and paralegals to ascertain the facts
pertaining to a case. These documents capture the conversation between a lawyer and a
deponent, which is in the form of questions and answers. Applying current automatic summarization
methods to these documents results in low-quality summaries. Though extensive
research has been performed in the area of summarization, not all methods succeed in all
domains. Accordingly, this research focuses on developing methods to generate high-quality
summaries of depositions. As part of our work related to legal deposition summarization, we
propose a solution in the form of a pipeline of components, each addressing a sub-problem;
we argue that a pipeline based framework can be tuned to summarize documents from any
domain.
First, we developed methods to parse the depositions, accounting for different document
formats. We were able to successfully parse both a proprietary and a public dataset with
our methods. We next developed methods to anonymize the personal information present in
the deposition documents; we achieve 95% accuracy on the anonymization using a random
sampling based evaluation. Third, we developed an ontology to define dialog acts for the
questions and answers present in legal depositions. Fourth, we developed classifiers based
on this ontology and achieved F1-scores of 0.84 and 0.87 on the public and proprietary
datasets, respectively. Fifth, we developed methods to transform a question-answer pair to
a canonical/simple form. In particular, based on the dialog acts for the question and answer
combination, we developed transformation methods using both traditional NLP and deep learning
techniques. We were able to achieve good scores on the ROUGE and semantic similarity
metrics for most of the dialog act combinations. Sixth, we developed methods based
on deep learning, heuristics, and machine translation to correct the transformed declarative
sentences. The sentence correction improved the readability of the transformed sentences.
Seventh, we developed a methodology to break a deposition into its topical aspects. An
ontology for aspects was defined for legal depositions, and classifiers were developed that
achieved an F1-score of 0.89. Eighth, we developed methods to segment the deposition into
parts that have the same thematic context. The segments helped in augmenting candidate
summary sentences with surrounding context, which leads to a more readable summary.
Ninth, we developed a pipeline to integrate all of the methods, to generate summaries from
the depositions. We were able to outperform the baseline and state-of-the-art summarization
methods in a majority of the cases based on the F1, Recall, and ROUGE-2 scores. The performance
gains were statistically significant for all of the scores. The summaries generated
by our system can be arranged based on the same thematic context or aspect and hence
should be much easier to read and follow, compared to the baseline methods. As part of our
future work, we will improve upon these methods. We will refine our methods to identify
the important parts using additional documents related to a deposition. In addition, we will
work to improve the compression ratio of the generated summaries by reducing the number
of unimportant sentences. We will expand the training dataset to learn and tune the coverage
of the aspects for various deponent types using empirical methods.
Our system has demonstrated effectiveness in transforming a QA pair into a declarative
sentence. Having such a capability could enable us to generate a narrative summary from
the depositions, a first for legal depositions. We will also expand our dataset for evaluation
to ensure that our methods are indeed generalizable, and that they work well when experts
subjectively evaluate the quality of the deposition summaries. / Doctor of Philosophy / Documents in the legal domain are of various types. One set of documents includes trial and
deposition transcripts. These documents capture the proceedings of a trial or a deposition
by note-taking, often over many hours. They contain conversation sentences that are spoken
during the trial or deposition and involve multiple actors. One of the greatest challenges
with these documents is that they are generally long. This is a source of pain for attorneys
and paralegals who work with the information contained in the documents.
Text summarization techniques have been successfully used to compress a document and capture
the salient parts from it. They have also been able to reduce redundancy in summary
sentences while focusing on coherence and proper sentence formation. Summarizing trial and
deposition transcripts would be immensely useful for law professionals, reducing the time to
identify and disseminate salient information in case-related documents, as well as reducing
costs and trial preparation time. Processing the deposition documents using traditional text
processing techniques is a challenge because of their form. Having the deposition conversations
transformed into a suitable declarative form where they can be easily comprehended
can pave the way for the usage of extractive and abstractive summarization methods. As
part of our work, we identified the different discourse structures present in the deposition
in the form of dialog acts. We developed methods based on those dialog acts to transform
the deposition into a declarative form. We were able to achieve an accuracy of 87% on the
dialog act classification. We were also able to transform the conversational question-answer
(QA) pairs into declarative forms for 10 of the top 11 dialog act combinations. Our transformation
methods performed better in 8 out of the 10 QA pair types when compared to the
baselines. We also developed methods to classify the deposition QA pairs according to their
topical aspects. We generated summaries using aspects by defining the relative coverage for
each aspect that should be present in a summary. Another set of methods developed can
segment the depositions into parts that have the same thematic context. These segments
aid in augmenting the candidate summary sentences to create a summary where information
is surrounded by associated context. This makes the summary more readable and informative;
we were able to significantly outperform the state-of-the-art methods, based on our
evaluations.
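To make the QA-to-declarative step concrete, here is a minimal rule-based sketch in the spirit of the transformation methods described above. The dialog-act labels ('binary', 'confirm', 'deny', 'wh', 'statement') and the rewrite rules are illustrative assumptions, not the ontology or methods actually developed in this work.

```python
# Minimal rule-based sketch: rewrite one deposition QA pair as a declarative
# sentence, conditioned on assumed (question, answer) dialog-act labels.
import re

AUX_MAP = {"were": "was", "are": "is", "do": "did", "have": "has"}

def to_declarative(question: str, answer: str, q_act: str, a_act: str) -> str:
    q = question.strip().rstrip("?")
    a = answer.strip().rstrip(".")
    m = re.match(r"(were|was|did|do|does|have|has|had|is|are)\s+you\s+(.*)", q, flags=re.I)
    if q_act == "binary" and a_act in ("confirm", "deny") and m:
        # Yes/no question: reuse the auxiliary verb and negate if the answer denies.
        aux = AUX_MAP.get(m.group(1).lower(), m.group(1).lower())
        negate = "" if a_act == "confirm" else " not"
        return f"The deponent {aux}{negate} {m.group(2)}."
    if q_act == "wh" and a_act == "statement":
        # Wh-question: report the answer as the deponent's statement.
        return f"When asked '{q}?', the deponent answered: '{a}.'"
    return f"Q: {q}? A: {a}."          # fallback: leave the pair untransformed

print(to_declarative("Were you present at the meeting?", "Yes.", "binary", "confirm"))
print(to_declarative("Did you sign the contract?", "No.", "binary", "deny"))
print(to_declarative("Where were you on May 3rd?", "At the office.", "wh", "statement"))
```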
|
374 |
Deep Learning for Enhancing Precision Medicine. Oh, Min. 07 June 2021.
Most medical treatments have been developed aiming at the best-on-average efficacy for large populations, resulting in treatments successful for some patients but not for others. This necessitates precision medicine that tailors medical treatment to individual patients. Omics data holds comprehensive genetic information on individual variability at the molecular level and hence the potential to be translated into personalized therapy. However, the attempts to transform omics data-driven insights into clinically actionable models for individual patients have been limited. Meanwhile, advances in deep learning, one of the most promising branches of artificial intelligence, have produced unprecedented performance in various fields. Although several deep learning-based methods have been proposed to predict individual phenotypes, they have not established the state of the practice, due to the instability of selected or learned features derived from extremely high-dimensional data with low sample sizes, which often results in overfitted models with high variance. To overcome the limitations of omics data, recent advances in deep learning models, including representation learning models, generative models, and interpretable models, can be considered. The goal of the proposed work is to develop deep learning models that can overcome the limitations of omics data to enhance the prediction of personalized medical decisions. To achieve this, three key challenges should be addressed: 1) effectively reducing the dimensions of omics data, 2) systematically augmenting omics data, and 3) improving the interpretability of omics data. / Doctor of Philosophy / Most medical treatments have been developed aiming at the best-on-average efficacy for large populations, resulting in treatments successful for some patients but not for others. This necessitates precision medicine that tailors medical treatment to individual patients. Biological data such as DNA sequences and snapshots of genetic activities hold comprehensive information on individual variability and hence the potential to accelerate personalized therapy. However, the attempts to transform data-driven insights into clinical models for individual patients have been limited. Meanwhile, advances in deep learning, one of the most promising branches of artificial intelligence, have produced unprecedented performance in various fields. Although several deep learning-based methods have been proposed to predict individual treatment or outcome, they have not established the state of the practice, due to the complexity of biological data and its limited availability, which often result in overfitted models that may work on training data but not on test or unseen data. To overcome the limitations of biological data, recent advances in deep learning models, including representation learning models, generative models, and interpretable models, can be considered. The goal of the proposed work is to develop deep learning models that can overcome the limitations of omics data to enhance the prediction of personalized medical decisions. To achieve this, three key challenges should be addressed: 1) effectively reducing the complexity of biological data, 2) generating realistic biological data, and 3) improving the interpretability of biological data.
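As an illustration of the first challenge, dimensionality reduction via representation learning, the sketch below compresses a high-dimensional omics profile into a small code with an autoencoder. The 20,000-feature input, layer widths, and 64-dimensional bottleneck are illustrative assumptions.

```python
# Minimal autoencoder sketch: learn a low-dimensional representation of
# high-dimensional omics profiles (placeholder data stands in for real cohorts).
import torch
import torch.nn as nn

n_genes, latent_dim = 20000, 64
encoder = nn.Sequential(nn.Linear(n_genes, 512), nn.ReLU(), nn.Linear(512, latent_dim))
decoder = nn.Sequential(nn.Linear(latent_dim, 512), nn.ReLU(), nn.Linear(512, n_genes))

optimizer = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

profiles = torch.rand(32, n_genes)                        # placeholder profiles for 32 patients
code = encoder(profiles)                                  # 64-dimensional patient representation
loss = nn.functional.mse_loss(decoder(code), profiles)   # reconstruct the profile from the code
loss.backward()
optimizer.step()
print(code.shape)                                         # torch.Size([32, 64])
```

The learned code, rather than the raw profile, would then feed a downstream predictor of patient-specific outcomes, reducing the overfitting risk noted above.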
|
375 |
Solutions to Passageways Detection in Natural Foliage with Biomimetic Sonar Robot. Wang, Ruihao. 22 June 2022.
Numerous bat species have evolved biosonar to obtain information from their habitats with dense vegetation. Different from man-made sensors, such as stereo cameras and LiDAR, bats' biosonar has much lower spatial resolution and sampling rates. Their biosonar is nevertheless capable of reliably finding narrow gaps in foliage to serve as passageways to fly through. To investigate the sensory information underlying this capability, we used a biomimetic sonar robot to collect narrow-gap echoes from an artificial hedge in a laboratory setup and from natural foliage in outdoor environments. The work in this dissertation presents the performance of a conventional energy approach and a deep-learning approach in classifying echoes from foliage and gaps. The deep-learning approach has better foliage-versus-passageway classification accuracy than the energy approach in both experiments, and it also shows better robustness than the energy approach when dealing with the highly variable data of the outdoor experiments. A class activation mapping approach indicates that the initial rising flank inside the echo spectrogram contains critical information. This result corresponds to a neuromorphic spiking model, which can be simplified to the times at which the echo amplitude crosses a certain threshold in a certain frequency range. With these findings, it could be demonstrated that the sensory information in clutter echoes plays an important role in detecting passageways in foliage, despite a sonar beamwidth that is wider than the passageway geometry. / Doctor of Philosophy / Many bat species are able to navigate and hunt in habitats with dense vegetation based on trains of biosonar echoes as their primary source of sensory information on the environment. Drones equipped with man-made sensory systems such as optical, thermal, or LiDAR sensors still face challenges when navigating in dense foliage. Bats are not only able to achieve higher reliability in detecting narrow gaps but accomplish this with much lower spatial resolutions and data rates than those of man-made sensors. To study which sensory information is accessible to bat biosonar for detecting passageways in foliage, a robot consisting of a biomimetic sonar and a camera system has been used to collect a large number of echoes and corresponding images (∼130k samples) from an artificial hedge constructed in the laboratory and various natural foliage targets found outdoors. We have applied a conventional energy approach, which is widely used in engineered sonar but is limited by the biosonar's wide beamwidth and only achieves a foliage-versus-passageway classification accuracy of ∼70%. To deal with this situation, a deep-learning approach has been used to improve performance. In addition, a transparent AI approach has been applied to overcome the black-box property of the deep-learning classifier and highlight its regions of interest. The results achieved in detecting passageways were closely matched between the artificial hedge in the laboratory setup and the field data. With the best classification accuracies of 97.13% (artificial hedge) and 96.64% (field data) achieved by the deep-learning approach, this work indicates the potential of exploring sensory information based on clutter echoes from complex environments for detecting passageways in foliage.
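The conventional energy approach referred to above can be illustrated with a few lines of code: a passageway in front of the sonar returns less echo energy than dense foliage, so thresholding the total echo energy gives a baseline classifier. The waveform length, synthetic echoes, and midpoint threshold rule are illustrative assumptions.

```python
# Minimal sketch of an energy-threshold baseline for foliage vs. passageway
# classification, using synthetic stand-in echo waveforms.
import numpy as np

def echo_energy(echo: np.ndarray) -> float:
    """Total energy of one received echo waveform."""
    return float(np.sum(echo.astype(np.float64) ** 2))

def fit_threshold(foliage_echoes, passage_echoes) -> float:
    """Place the decision boundary midway between the mean class energies."""
    e_fol = np.mean([echo_energy(e) for e in foliage_echoes])
    e_pas = np.mean([echo_energy(e) for e in passage_echoes])
    return 0.5 * (e_fol + e_pas)

def classify(echo, threshold) -> str:
    return "foliage" if echo_energy(echo) > threshold else "passageway"

# Illustrative use with synthetic echoes (400 samples, i.e. 1 ms at 400 kHz).
rng = np.random.default_rng(1)
foliage = [rng.normal(0, 1.0, 400) for _ in range(50)]     # stronger clutter returns
passage = [rng.normal(0, 0.3, 400) for _ in range(50)]     # weaker returns through a gap
thr = fit_threshold(foliage, passage)
print(classify(passage[0], thr))    # expected: "passageway"
```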
|
376 |
A Deep-learning based Approach for Foot Placement Prediction. Lee, Sung-Wook. 24 May 2023.
Foot placement prediction can be important for exoskeleton and prosthesis controllers, human-robot interaction, or body-worn systems to prevent slips or trips. Previous studies investigating foot placement prediction have been limited to predicting foot placement during the swing phase and do not fully consider contextual information such as the preceding step or the stance phase before push-off. In this study, a deep learning-based foot placement prediction approach was proposed, where the deep learning models were designed to sequentially process data from three IMU sensors mounted on the pelvis and feet. The raw sensor data are pre-processed to generate multi-variable time-series data for training two deep learning models, where the first model estimates the gait progression and the second model subsequently predicts the next foot placement. The ground truth gait phase data and foot placement data were acquired from a motion capture system. Ten healthy subjects were invited to walk naturally at different speeds on a treadmill. In cross-subject learning, the trained models had a mean distance error of 5.93 cm for foot placement prediction. In single-subject learning, the prediction accuracy improved with additional training data, and a mean distance error of 2.60 cm was achieved by fine-tuning the cross-subject validated models with the target subject's data. Even when predictions were made at 25-81% of the gait cycle, mean distance errors were only 6.99 cm and 3.22 cm for cross-subject learning and single-subject learning, respectively. / Master of Science / This study proposes a new approach for predicting where a person's foot will land during walking, which could be useful for controlling robots and wearable devices that work with humans, preventing events such as slips and falls, and allowing for smoother human-robot interactions. Although foot placement prediction has great potential in various domains, current works in this area are limited in terms of practicality and accuracy. The proposed approach uses data from inertial sensors attached to the pelvis and feet, and two deep learning models are trained to estimate the person's walking pattern and predict their next foot placement. The approach was tested on ten healthy individuals walking at different speeds on a treadmill and achieved state-of-the-art results. The results suggest that this approach could be a promising method when sufficient data from multiple people are available.
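A minimal sketch of the two-stage design described above is given below: a first network estimates gait progression from a window of IMU time series, and a second network uses the same window plus that estimate to predict the next foot placement as a 2-D offset. The window length, channel count, recurrent architecture, and layer sizes are illustrative assumptions rather than the models used in this study.

```python
# Minimal two-stage sketch: stage 1 estimates gait progression from IMU windows,
# stage 2 predicts the next foot placement using the window plus that estimate.
import torch
import torch.nn as nn

window, channels = 100, 18          # 100 samples, 3 IMUs x (3 accel + 3 gyro) axes

class GaitPhaseNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(input_size=channels, hidden_size=32, batch_first=True)
        self.head = nn.Linear(32, 1)          # gait progression in [0, 1]
    def forward(self, x):
        _, h = self.rnn(x)
        return torch.sigmoid(self.head(h[-1]))

class FootPlacementNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(input_size=channels + 1, hidden_size=32, batch_first=True)
        self.head = nn.Linear(32, 2)          # (x, y) of the next foot placement
    def forward(self, x, phase):
        phase_seq = phase.unsqueeze(1).expand(-1, x.shape[1], -1)
        _, h = self.rnn(torch.cat([x, phase_seq], dim=-1))
        return self.head(h[-1])

imu = torch.rand(4, window, channels)          # placeholder IMU windows
phase = GaitPhaseNet()(imu)                    # stage 1: estimate gait progression
placement = FootPlacementNet()(imu, phase)     # stage 2: predict next foot placement
print(placement.shape)                         # torch.Size([4, 2])
```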
|
377 |
Parkinson's Disease Automated Hand Tremor Analysis from Spiral Images. DeSipio, Rebecca E. 05 1900.
Parkinson’s Disease is a neurological degenerative disease affecting more than six million people worldwide. It is a progressive disease, impacting a person’s movements and thought processes. In recent years, computer vision and machine learning researchers have been developing techniques to aid in the diagnosis. This thesis is motivated by the exploration of hand tremor symptoms in Parkinson’s patients from the Archimedean Spiral test, a paper-and-pencil test used to evaluate hand tremors. This work presents a novel Fourier Domain analysis technique that transforms the pencil content of hand-drawn spiral images into frequency features. Our technique is applied to an image dataset consisting of spirals drawn by healthy individuals and people with Parkinson’s Disease. The Fourier Domain analysis technique achieves 81.5% accuracy predicting images drawn by someone with Parkinson’s, a result 6% higher than previous methods. We compared this method against the results using extracted features from the ResNet-50 and VGG16 pre-trained deep network models. The VGG16 extracted features achieve 95.4% accuracy classifying images drawn by people with Parkinson’s Disease. The extracted features of both methods were also used to develop a tremor severity rating system which scores the spiral images on a scale from 0 (no tremor) to 1 (severe tremor). The results show correlation to the Unified Parkinson’s Disease Rating Scale (MDS-UPDRS) developed by the International Parkinson and Movement Disorder Society. These results can be useful for aiding in early detection of tremors, the medical treatment process, and symptom tracking to monitor the progression of Parkinson’s Disease. / M.S. / Parkinson’s Disease is a neurological degenerative disease affecting more than six million people worldwide. It is a progressive disease, impacting a person’s movements and thought processes. In recent years, computer vision and machine learning researchers have been developing techniques to aid in the diagnosis. This thesis is motivated by the exploration of hand tremor symptoms in Parkinson’s patients from the Archimedean Spiral test, a paper-and-pencil test used to evaluate hand tremors. This work presents a novel spiral analysis technique that converts the pencil content of hand-drawn spirals into numeric values, called features. The features measure spiral smoothness. Our technique is applied to an image dataset consisting of spirals drawn by healthy and Parkinson’s individuals. The spiral analysis technique achieves 81.5% accuracy predicting images drawn by someone with Parkinson’s. We compared this method against the results using extracted features from pre-trained deep network models. The VGG16 pre-trained model extracted features achieve 95.4% accuracy classifying images drawn by people with Parkinson’s Disease. The extracted features of both methods were also used to develop a tremor severity rating system which scores the spiral images on a scale from 0 (no tremor) to 1 (severe tremor). The results show a similar trend to the tremor evaluations rated by the Unified Parkinson’s Disease Rating Scale (MDS-UPDRS) developed by the International Parkinson and Movement Disorder Society. These results can be useful for aiding in early detection of tremors, the medical treatment process, and symptom tracking to monitor the progression of Parkinson’s Disease.
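The following sketch illustrates a Fourier-domain feature pipeline in the spirit of the technique described above: the 2-D FFT of a binarized spiral drawing is pooled into radial frequency bands, and those band energies feed a simple classifier. The band count, synthetic spirals, and logistic-regression classifier are illustrative assumptions, not the thesis's exact method.

```python
# Minimal sketch: radially pooled FFT magnitude features from spiral drawings,
# classified with logistic regression on synthetic stand-in images.
import numpy as np
from sklearn.linear_model import LogisticRegression

def fourier_band_features(image: np.ndarray, n_bands: int = 16) -> np.ndarray:
    """Radially pooled magnitude spectrum of a 2-D (H, W) pencil-mask image."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image)))
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h / 2, xx - w / 2)
    edges = np.linspace(0, r.max(), n_bands + 1)
    return np.array([spectrum[(r >= lo) & (r < hi)].mean()
                     for lo, hi in zip(edges[:-1], edges[1:])])

rng = np.random.default_rng(0)

def fake_spiral(tremor):
    """Crude stand-in for a scanned drawing: tremor adds high-frequency wobble."""
    t = np.linspace(0, 6 * np.pi, 2000)
    r = t + tremor * np.sin(40 * t)
    img = np.zeros((128, 128))
    x = (64 + r * np.cos(t) * 3).astype(int).clip(0, 127)
    y = (64 + r * np.sin(t) * 3).astype(int).clip(0, 127)
    img[y, x] = 1.0
    return img

X = np.array([fourier_band_features(fake_spiral(tremor=rng.uniform(0, 0.2 + 1.3 * label)))
              for label in (0, 1) for _ in range(40)])
y = np.repeat([0, 1], 40)                      # 0 = steady drawing, 1 = tremor
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("training accuracy:", clf.score(X, y))
```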
|
378 |
Accelerating Conceptual Design Analysis of Marine Vehicles through Deep Learning. Jones, Matthew Cecil. 02 May 2019.
Evaluation of the flow field imparted by a marine vehicle reveals the underlying efficiency and performance. However, the relationship between precise design features and their impact on the flow field is not well characterized. The goal of this work is, first, to investigate the thermally-stratified near field of a self-propelled marine vehicle to identify the significance of propulsion and hull-form design decisions, and, second, to develop a functional mapping between an arbitrary vehicle design and its associated flow field to accelerate the design analysis process. The unsteady Reynolds-Averaged Navier-Stokes equations are solved to compute near-field wake profiles, showing good agreement with experimental data and providing a balance between simulation fidelity and numerical cost, given the database of cases considered. Machine learning through convolutional networks is employed to discover the relationship between vehicle geometries and their associated flow fields with two distinct deep-learning networks. The first network directly maps explicitly-specified geometric design parameters to their corresponding flow fields. The second network considers the vehicle geometries themselves as tensors of geometric volume fractions to implicitly learn the underlying parameter space. Once trained, both networks effectively generate realistic flow fields, accelerating the design analysis from a process that takes days to one that takes a fraction of a second. The implicit-parameter network successfully learns the underlying parameter space for geometries within the scope of the training data, showing comparable performance to the explicit-parameter network. With additions to the size and variability of the training database, this network has the potential to abstractly generalize the design space for arbitrary geometric inputs, even those beyond the scope of the training data. / Doctor of Philosophy / Evaluation of the flow field of a marine vehicle reveals the underlying performance; however, the exact relationship between design features and their impact on the flow field is not well established. The goal of this work is, first, to investigate the flow surrounding a self-propelled marine vehicle to identify the significance of various design decisions, and, second, to develop a functional relationship between an arbitrary vehicle design and its flow field, thereby accelerating the design analysis process. Near-field wake profiles are computed through simulation, showing good agreement with experimental data. Machine learning is employed to discover the relationship between vehicle geometries and their associated flow fields with two distinct approaches. The first approach directly maps explicitly-specified geometric design parameters to their corresponding flow fields. The second approach considers the vehicle geometries themselves to implicitly learn the underlying relationships. Once trained, both approaches generate a realistic flow field corresponding to a user-provided vehicle geometry, accelerating the design analysis from a multi-day process to one that takes a fraction of a second. The implicit-parameter approach successfully learns from the underlying geometric features, showing comparable performance to the explicit-parameter approach. With a larger and more diverse training database, this network has the potential to abstractly learn the design space relationships for arbitrary marine vehicle geometries, even those beyond the scope of the training database.
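The implicit-parameter input representation described above can be sketched as follows: the vehicle geometry is rasterized into a tensor of per-cell volume fractions (1 inside the hull, 0 outside, fractional on the boundary) that a convolutional network can consume. The ellipsoidal hull shape, grid resolution, and subsampling level are illustrative assumptions.

```python
# Minimal sketch: rasterize a 2-D ellipsoidal hull section into per-cell volume
# fractions, the kind of implicit geometry tensor a CNN could take as input.
import numpy as np

def hull_volume_fractions(length, diameter, grid=(64, 256), subsample=4):
    """Estimate the fraction of each grid cell occupied by an ellipsoidal hull."""
    nz, nx = grid
    frac = np.zeros(grid)
    offsets = (np.arange(subsample) + 0.5) / subsample   # sub-points within a cell
    for iz in range(nz):
        for ix in range(nx):
            zs = (iz + offsets)[:, None] / nz - 0.5       # normalized vertical coords
            xs = (ix + offsets)[None, :] / nx - 0.5       # normalized axial coords
            inside = (xs / (length / 2)) ** 2 + (zs / (diameter / 2)) ** 2 <= 1.0
            frac[iz, ix] = inside.mean()
    return frac

fractions = hull_volume_fractions(length=0.9, diameter=0.2)
print(fractions.shape, fractions.max(), fractions.min())   # (64, 256) 1.0 0.0
# `fractions[None, None]` (a 1 x 1 x H x W tensor) would be the input to a CNN
# regressing the downstream wake profile, mirroring the implicit-parameter network.
```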
|
379 |
CloudCV: Deep Learning and Computer Vision on the Cloud. Agrawal, Harsh. 20 June 2016.
We are witnessing a proliferation of massive visual data. Visual content is arguably the fastest growing data on the web. Photo-sharing websites like Flickr and Facebook now host more than 6 and 90 billion photos, respectively. Unfortunately, scaling existing computer vision algorithms to large datasets leaves researchers repeatedly solving the same algorithmic and infrastructural problems. Designing and implementing efficient and provably correct computer vision algorithms is extremely challenging. Researchers must repeatedly solve the same low-level problems: building and maintaining a cluster of machines, formulating each component of the computer vision pipeline, designing new deep learning layers, writing custom hardware wrappers, etc. This thesis introduces CloudCV, an ambitious system that contains algorithms for end-to-end processing of visual content.
The goal of the project is to democratize computer vision; one should not have to be a computer vision, big data, and deep learning expert to have access to state-of-the-art distributed computer vision algorithms. We provide researchers, students, and developers access to state-of-the-art distributed computer vision and deep learning algorithms as a cloud service through a web interface and APIs. / Master of Science
|
380 |
Deep Learning Neural Network-based Sinogram Interpolation for Sparse-View CT Reconstruction. Vekhande, Swapnil Sudhir. 14 June 2019.
Computed Tomography (CT) finds applications across domains like medical diagnosis, security screening, and scientific research. In medical imaging, CT allows physicians to diagnose injuries and disease more quickly and accurately than other imaging techniques. However, CT is one of the most significant contributors of radiation dose to the general population, and the radiation dose required for scanning could lead to cancer. On the other hand, too low a radiation dose could sacrifice image quality, causing misdiagnosis. To reduce the radiation dose, sparse-view CT, which involves capturing a smaller number of projections, becomes a promising alternative. However, the image reconstructed from linearly interpolated views possesses severe artifacts.
Recently, deep learning-based methods are increasingly being used to interpolate the missing data by learning the nature of the image formation process. The current methods are promising but operate mostly in the image domain, presumably due to the lack of projection data. Another limitation is the use of simulated data with less sparsity (up to 75%). This research aims to interpolate the missing sparse-view CT data in the sinogram domain using deep learning. To this end, a residual U-Net architecture has been trained with patch-wise projection data to minimize the Euclidean distance between the ground truth and the interpolated sinogram. The model can generate the missing projection data even at high sparsity. The results show improvements in SSIM and RMSE of 14% and 52%, respectively, with respect to the linear interpolation-based methods. Thus, experimental sparse-view CT data with 90% sparsity has been successfully interpolated while improving CT image quality. / Master of Science / Computed Tomography is a commonly used imaging technique due to its remarkable ability to visualize internal organs, bones, soft tissues, and blood vessels. It involves exposing the subject to X-ray radiation, which could lead to cancer. On the other hand, the radiation dose is critical for the image quality and subsequent diagnosis. Thus, image reconstruction using only a small number of projections is an open research problem. Deep learning techniques have already revolutionized various computer vision applications. Here, we have used a method that fills in missing CT data at high sparsity. The results show that the deep learning-based method outperforms standard linear interpolation-based methods while improving the image quality.
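A minimal sketch of the training setup described above is shown below: a small U-Net with a global residual connection takes a linearly interpolated sinogram patch and learns the correction toward the fully sampled patch by minimizing the Euclidean (MSE) distance. The depth, channel counts, and 64x64 patch size are illustrative assumptions and much smaller than a practical model.

```python
# Minimal residual U-Net sketch for patch-wise sinogram interpolation,
# trained on placeholder data with an MSE (Euclidean distance) loss.
import torch
import torch.nn as nn

def block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU())

class ResidualUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1, self.enc2 = block(1, 16), block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = block(32, 16)
        self.out = nn.Conv2d(16, 1, 1)

    def forward(self, x):
        e1 = self.enc1(x)                                     # full-resolution features
        e2 = self.enc2(self.pool(e1))                         # half-resolution features
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))   # skip connection
        return x + self.out(d1)                               # global residual correction

model = ResidualUNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

sparse_patch = torch.rand(8, 1, 64, 64)    # placeholder linearly interpolated sinogram patches
full_patch = torch.rand(8, 1, 64, 64)      # placeholder fully sampled ground truth
loss = nn.functional.mse_loss(model(sparse_patch), full_patch)
loss.backward()
optimizer.step()
```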
|