141
Three problems in computer vision: design, fabrication and analysis of paper sensors for detecting food contaminants, segmentation of food crystal images, and zero-shot action recognition in video sequences. Qiyue Liang (19349125) 09 August 2024 (has links)
<p dir="ltr">This dissertation delves into three projects within the realms of image processing, computer vision, and machine/deep learning. The primary objective of the first project is the detection of heavy metal particle concentrations using microfluidic paper-based devices. The second project revolves around the analysis of crystals within microscopic images. The third project centers around zero-shot action recognition in video sequences, utilizing a multi-modality deep learning framework that is refined through prompt tuning to enhance its performance.</p>
142
Driver Behavior Anomaly Recognition by Enhanced Contrastive Learning Framework Aayush Rajesh Mailarpwar (20353431) 10 January 2025 (has links)
<p dir="ltr">Distracted driving is at the forefront of the leading causes of road accidents. Therefore, research advancements in Driver Monitoring Systems (DMS) are vital in facilitating prevention techniques. These systems must be able to detect anomalous driving behavior by evaluating deviations from some predefined normal driving behavior. This thesis proposes an improved contrastive learning approach that introduces a hybrid loss function combining triplet loss and supervised contrastive loss, as well as improvements to the projection head of the framework. It progresses the architecture by performing a multi-threshold severity calculation and data processing using an exponential moving average technique. Due to the unbounded possibilities of anomalous driving behaviors, the proposed framework was tested on the Driver Anomaly Detection (DAD) dataset that incorporates multi-modal and multi-view inputs in an open set recognition setting. The test set of the DAD dataset has anomalous actions that are unseen by the trained model; therefore, high precision on such a dataset demonstrates success on any other closed-set recognition task. The proposed framework achieved an impressive accuracy, reaching 94.14\%, AUC-ROC at 0.9787, and AUC-PR at 0.9781 on the test set. These findings contribute to in-vehicle monitoring by providing a scalable and adaptable framework suitable for real-world conditions.</p>
143
Representation Learning for Biomedical Text Mining Sänger, Mario 10 January 2025 (has links)
With the rapid growth of biomedical literature, obtaining comprehensive information regarding particular biomedical entities and relations by reading alone is becoming increasingly difficult. Text mining approaches seek to facilitate processing these vast amounts of text using machine learning; this makes the effective and efficient encoding of all relevant information regarding specific entities a central challenge. In this thesis, we contribute to this research by developing machine learning methods for learning entity and text representations based on large-scale publication repositories and diverse information from in-domain knowledge bases. First, we propose two novel relation extraction approaches that use representation learning techniques to create comprehensive models of entities or entity pairs. These models learn low-dimensional embeddings by considering all publications from PubMed mentioning a specific entity or pair of entities. We use these embeddings as input for a neural network to classify relations globally, i.e., predictions are based on the entire corpus, not on single sentences or articles as in prior art. In our second contribution, we investigate the impact of multi-modal entity information on biomedical link prediction using knowledge graph embedding methods (KGEMs). Our study enhances existing KGEMs by augmenting biomedical knowledge graphs with multi-modal entity information from in-domain databases. We propose a general framework for integrating this information into the KGEM entity representation learning process. In our third contribution, we augment pre-trained language models (PLMs) with additional context information to identify interactions described in scientific texts. We perform an extensive benchmark that assesses the performance of such models across a wide range of biomedical relation scenarios, providing a comprehensive, but so far missing, evaluation of knowledge-augmented PLM-based extraction models.
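A minimal sketch of the global relation classification idea, assuming mention-level embeddings have already been pooled from every PubMed article mentioning an entity pair; the embedding dimension, relation count, and mean-pooling choice are illustrative assumptions, not the thesis implementation:

```python
import torch
import torch.nn as nn

class GlobalRelationClassifier(nn.Module):
    """Classify an entity pair's relation from corpus-wide evidence."""
    def __init__(self, embed_dim=768, num_relations=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(embed_dim, 256), nn.ReLU(),
            nn.Linear(256, num_relations),
        )

    def forward(self, mention_embeddings):
        # mention_embeddings: (num_mentions, embed_dim), one vector per
        # sentence/article mentioning the pair, gathered across PubMed
        pair_vector = mention_embeddings.mean(dim=0)   # global aggregation
        return self.net(pair_vector)

# usage: logits over relation types for one pair with 120 corpus mentions
model = GlobalRelationClassifier()
logits = model(torch.randn(120, 768))
```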
144
Spectroscopie bimodale en diffusion élastique et autofluorescence résolue spatialement : instrumentation, modélisation des interactions lumière-tissus et application à la caractérisation de tissus biologiques ex vivo et in vivo pour la détection de cancers / Spatially resolved bimodal spectroscopy in elastic scattering and autofluorescence: instrumentation, light-tissue interaction modeling, and application to the characterization of biological tissues ex vivo and in vivo for cancer detection Péry, Emilie 31 October 2007 (has links)
The objective of this research is the development, refinement, and validation of a multi-modality spectroscopy method combining elastic scattering and autofluorescence to characterize biological tissues in vitro and in vivo. The work is organized along four axes. The first part addresses instrumentation: the development, construction, and experimental characterization of a fiber-based, multi-point bimodal spectrometry system allowing the acquisition of spectra in vivo (variable distances, fast acquisition). The second part concerns modeling the optical properties of tissue: the development and experimental validation on phantoms of an algorithm simulating photon propagation in turbid, multi-fluorescent media. The third part presents an experimental study conducted ex vivo on fresh and cryopreserved arterial rings. It confirms the complementarity of spectroscopic measurements in elastic scattering and autofluorescence, and validates both the multi-modality spectroscopy method and the photon propagation simulation algorithm. The original results obtained show a correlation between rheological and optical properties. The fourth part develops a second experimental study, in vivo, on a pre-clinical tumor model of the bladder. It highlights a significant difference in diffuse reflectance and/or autofluorescence and/or intrinsic fluorescence between healthy, inflammatory, and tumoral tissues at specific wavelengths. The results of the unsupervised classification show that combining different spectroscopic approaches increases the reliability of the diagnosis.
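A minimal sketch of the photon-propagation Monte Carlo idea underlying such a simulation algorithm; the optical coefficients and the isotropic-scattering simplification are illustrative assumptions (real tissue is strongly forward-scattering, and a full model would sample a Henyey-Greenstein phase function):

```python
import numpy as np

rng = np.random.default_rng(0)

def propagate_photon(mu_a=0.1, mu_s=10.0, max_steps=10_000):
    """Random walk of a single photon in a homogeneous turbid medium:
    exponentially distributed free paths, absorption-weighted survival.
    mu_a and mu_s are absorption/scattering coefficients in mm^-1
    (illustrative values)."""
    mu_t = mu_a + mu_s
    pos = np.zeros(3)
    weight, path = 1.0, 0.0
    for _ in range(max_steps):
        step = -np.log(1.0 - rng.random()) / mu_t      # free path length
        cos_t = 2.0 * rng.random() - 1.0               # isotropic direction
        phi = 2.0 * np.pi * rng.random()
        sin_t = np.sqrt(1.0 - cos_t ** 2)
        pos += step * np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])
        path += step
        weight *= mu_s / mu_t                          # fraction not absorbed
        if weight < 1e-4:                              # terminate faint photons
            break
    return pos, path

print(propagate_photon())
```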
145
Pre-Clinical Multi-Modal Imaging for Assessment of Pulmonary Structure, Function and Pathology Namati, Eman, eman@namati.com January 2008 (has links)
In this thesis, we describe several imaging techniques specifically designed and developed for the assessment of pulmonary structure, function and pathology. We then describe the application of this technology within appropriate biological systems, including the identification, tracking and assessment of lung tumors in a mouse model of lung cancer.
The design and development of a Large Image Microscope Array (LIMA), an integrated whole organ serial sectioning and imaging system, is described with emphasis on whole lung tissue. This system provides a means for acquiring 3D pathology of fixed whole lung specimens with no infiltrative embedment medium using a purpose-built vibratome and imaging system. This system enables spatial correspondence between histology and non-invasive imaging modalities such as Computed Tomography (CT), Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET), providing precise correlation of the underlying 'ground truth' pathology back to the in vivo imaging data. The LIMA system is evaluated using fixed lung specimens from sheep and mice, resulting in large, high-quality pathology datasets that are accurately registered to their respective CT and H&E histology.
The implementation of an in vivo micro-CT imaging system in the context of pulmonary imaging is described. Several techniques are initially developed to reduce artifacts commonly associated with commercial micro-CT systems, including geometric gantry calibration, ring artifact reduction and beam hardening correction. A computer controlled Intermittent Iso-pressure Breath Hold (IIBH) ventilation system is then developed for reduction of respiratory motion artifacts in live, breathing mice. A study validating the repeatability of extracting valuable pulmonary metrics using this technique against standard respiratory gating techniques is then presented.
The development of an ex vivo laser scanning confocal microscopy (LSCM) and an in vivo catheter based confocal microscopy (CBCM) pulmonary imaging technique is described. Direct high-resolution imaging of sub-pleural alveoli is presented and an alveolar mechanic study is undertaken. Through direct quantitative assessment of alveoli during inflation and deflation, recruitment and de-recruitment of alveoli is quantitatively measured. Based on the empirical data obtained in this study, a new theory on alveolar mechanics is proposed.
Finally, a longitudinal mouse lung cancer study utilizing the imaging techniques described and developed throughout this thesis is presented. Lung tumors are identified, tracked and analyzed over a 6-month period using a combination of micro-CT, micro-PET, micro-MRI, LSCM, CBCM, LIMA and H&E histology imaging. The growth rate of individual tumors is measured using the micro-CT data and traced back to the histology using the LIMA system. A significant difference in tumor growth rates within mice is observed, including slow-growing, regressive, disappearing and aggressive tumors, while no difference between the phenotype of tumors was found from the H&E histology. Micro-PET and micro-MRI imaging was conducted at the 6-month time point and revealed the limitation of these systems for detection of small lesions (<2 mm) in this mouse model of lung cancer. The CBCM imaging provided the first high-resolution live pathology of this mouse model of lung cancer and revealed distinct differences between normal, suspicious and tumor regions. In addition, a difference was found between the parenchyma of control A/J mice and the normal parenchyma of urethane-treated A/J mice, suggesting a 'field effect' as a result of the urethane administration and/or tumor burden. In conclusion, a comprehensive murine lung cancer imaging study was undertaken, and new information regarding the progression of tumors over time has been revealed.
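A minimal sketch of how a per-tumor growth rate can be estimated from serial micro-CT volume measurements, assuming exponential growth; the time points and volumes below are invented for illustration:

```python
import numpy as np

def growth_rate(days, volumes_mm3):
    """Per-day exponential rate constant: least-squares slope of
    log(volume) against time."""
    k, _ = np.polyfit(days, np.log(volumes_mm3), 1)
    return k

# illustrative measurements for one tumor over a 6-month study
k = growth_rate([0, 60, 120, 180], [0.4, 0.9, 2.1, 4.6])
print(f"rate = {k:.4f}/day, doubling time = {np.log(2) / k:.0f} days")
```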
146
Ultrasonic Arrays for Sensing and Beamforming of Lamb Waves Engholm, Marcus January 2010 (has links)
Non-destructive testing (NDT) techniques are critical to ensure the integrity and safety of engineered structures. Structural health monitoring (SHM) is considered the next step in the field, enabling continuous monitoring of structures. The first part of the thesis concerns NDT and SHM using guided waves in plates, or Lamb waves, to perform imaging of plate structures. The imaging is performed using a fixed active array setup covering a larger area of a plate. Current methods are based on conventional beamforming techniques that do not efficiently exploit the available data from the small arrays used for the purpose. In this thesis an adaptive signal processing approach based on the minimum variance distortionless response (MVDR) method is proposed to mitigate issues related to guided waves, such as dispersion and the presence of multiple propagating modes. Other benefits of the method include a significant increase in resolution. Simulation and experimental results show that the method outperforms current standard processing techniques. The second part of the thesis addresses transducer design issues for resonant ultrasound inspections. Resonant ultrasound methods utilize the shape and frequency of the object's natural modes of vibration to detect anomalies. The method considered in the thesis uses transducers that are acoustically coupled to the inspected structures; changes in the transducer's electrical impedance are used to detect defects. The sensitivity that can be expected from such a setup is shown to depend strongly on the transducer resonance frequency, as well as the working frequency of the instrument. Through simulations and theoretical argument, optimal conditions for achieving high sensitivity are derived.
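A minimal sketch of the MVDR (Capon) weight computation at the heart of such an adaptive beamformer, with diagonal loading for robustness; the array geometry and narrowband steering model are illustrative assumptions, not the thesis setup:

```python
import numpy as np

def mvdr_weights(R, a, loading=1e-3):
    """MVDR weights w = R^{-1} a / (a^H R^{-1} a) for sample covariance R
    and steering vector a; diagonal loading stabilizes the inverse."""
    n = len(R)
    Rl = R + loading * (np.trace(R).real / n) * np.eye(n)
    Ri_a = np.linalg.solve(Rl, a)
    return Ri_a / (a.conj() @ Ri_a)

def steering_vector(n_sensors, spacing, wavelength, theta):
    """Narrowband plane-wave steering vector for a uniform linear array."""
    k = 2.0 * np.pi / wavelength
    return np.exp(-1j * k * spacing * np.arange(n_sensors) * np.sin(theta))

# spatial power estimate at one steering angle (identity stands in for a
# measured covariance matrix)
a = steering_vector(8, spacing=5e-3, wavelength=1e-2, theta=0.3)
R = np.eye(8, dtype=complex)
power = 1.0 / (a.conj() @ np.linalg.solve(R, a)).real
```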
147
Typografins tolkning : En undersökning av typografins betydelse vid tolkning av text Toreheim, Mimmi January 2011 (has links)
This paper addresses the question of what role typography plays in the interpretation of a text. Arguments are gathered from three different handbooks in typography and categorized into three categories: roman types, sans serifs, and others. Interviews with people from the graphic design field are also part of the paper and are accounted for in its discussion section. The theoretical basis is a broad hermeneutic view built on Hans-Georg Gadamer's thought, with sub-categories such as Michel Foucault's theory of discourses, John Swales's genre theory, and Anders Björkvall's thoughts on typography and multi-modal texts. The conclusion of the paper is that all typography, even what is often called invisible typography, is interpreted by the reader, who draws pre-understanding from genre, history, culture and discourse. This means that typography plays an important role in the interpretation of a text. Key words: typography, interpretation, hermeneutics, Hans-Georg Gadamer, discourse, Michel Foucault, genre analysis, John Swales, multi-modal, Anders Björkvall, semiotics.
148
Hybrid 2D and 3D face verification McCool, Christopher Steven January 2007 (has links)
Face verification is a challenging pattern recognition problem. The face is a biometric that we as humans know can be recognised. However, the face is highly deformable and its appearance alters significantly when the pose, illumination or expression changes. These changes in appearance are most notable for texture images, or two-dimensional (2D) data. But the underlying structure of the face, or three-dimensional (3D) data, is not changed by pose or illumination variations. Over the past five years methods have been investigated to combine 2D and 3D face data to improve the accuracy and robustness of face verification. Much of this research has examined the fusion of a 2D verification system and a 3D verification system, known as multi-modal classifier score fusion. These verification systems usually compare two feature vectors (two image representations), a and b, using distance or angular-based similarity measures. However, this does not provide the most complete description of the features being compared, as the distances describe at best the covariance of the data, or the second-order statistics (for instance, Mahalanobis-based measures). A more complete description would be obtained by describing the distribution of the feature vectors. However, feature distribution modelling is rarely applied to face verification because a large number of observations is required to train the models. This amount of data is usually unavailable, and so this research examines two methods for overcoming this data limitation: 1. the use of holistic difference vectors of the face, and 2. dividing the 3D face into Free-Parts. Permutations of the holistic difference vectors are formed so that more observations are obtained from a set of holistic features. On the other hand, by dividing the face into parts and considering each part separately, many observations are obtained from each face image; this approach is referred to as the Free-Parts approach. The extra observations from both these techniques are used to perform holistic feature distribution modelling and Free-Parts feature distribution modelling respectively. It is shown that the feature distribution modelling of these features leads to an improved 3D face verification system and an effective 2D face verification system. Using these two feature distribution techniques, classifier score fusion is then examined. This thesis also examines methods for performing classifier score fusion. Classifier score fusion attempts to combine complementary information from multiple classifiers. This complementary information can be obtained in two ways: by using different algorithms (multi-algorithm fusion) to represent the same face data, for instance the 2D face data, or by capturing the face data with different sensors (multi-modal fusion), for instance capturing 2D and 3D face data. Multi-algorithm fusion is approached as combining verification systems that use holistic features and local features (Free-Parts), and multi-modal fusion examines the combination of 2D and 3D face data using all of the investigated techniques. The results of the fusion experiments show that multi-modal fusion leads to a consistent improvement in performance. This is attributed to the fact that the data being fused is collected by two different sensors, a camera and a laser scanner. In deriving the multi-algorithm and multi-modal algorithms, a consistent framework for fusion was developed. The consistent fusion framework, developed from the multi-algorithm and multi-modal experiments, is used to combine multiple algorithms across multiple modalities. This fusion method, referred to as hybrid fusion, is shown to provide improved performance over either fusion system on its own. The experiments show that the final hybrid face verification system reduces the False Rejection Rate from 8.59% for the best 2D verification system and 4.48% for the best 3D verification system to 0.59% for the hybrid verification system, at a False Acceptance Rate of 0.1%.
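A minimal sketch of the kind of score-level fusion described above: per-classifier match scores are min-max normalized and combined with a weighted sum. The equal weights and the random stand-in scores are illustrative assumptions, not the thesis's fusion framework:

```python
import numpy as np

def min_max_normalize(scores):
    """Map a set of match scores onto [0, 1]."""
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min() + 1e-12)

def fuse_scores(score_lists, weights=None):
    """Weighted-sum fusion of match scores from several classifiers."""
    normed = [min_max_normalize(s) for s in score_lists]
    if weights is None:
        weights = np.full(len(normed), 1.0 / len(normed))
    return sum(w * s for w, s in zip(weights, normed))

# hybrid-fusion example: 2D/3D x holistic/Free-Parts score sets (stand-ins)
rng = np.random.default_rng(0)
scores = [rng.random(100) for _ in range(4)]
fused = fuse_scores(scores)
```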
149
Modélisation pour la reconnaissance continue de la langue française parlée complétée à l'aide de méthodes avancées d'apprentissage automatique / Modeling for Continuous Cued Speech Recognition in French using Advanced Machine Learning Methods Liu, Li 11 September 2018 (has links)
This PhD thesis deals with automatic continuous Cued Speech (CS) recognition based on images of subjects, without marking any artificial landmarks. To realize this objective, we extract high-level features from three information flows (lips, hand positions and hand shapes) and find an optimal approach to merging them for a robust CS recognition system. We first introduce a powerful deep learning method based on Convolutional Neural Networks (CNNs) for extracting the hand shape and lips features from raw images. Adaptive background mixture models (ABMMs) are also applied to obtain the hand position features. Meanwhile, building on an advanced machine learning method from computer vision, Constrained Local Neural Fields (CLNF), we propose the Modified CLNF to extract the inner-lips parameters (lip stretching and opening), as well as another method named the adaptive ellipse model. All these methods make significant contributions to feature extraction in CS. Then, owing to the asynchrony of the three feature flows (lips, hand shape and hand position) in CS, their fusion is a challenging issue. To resolve it, we propose several approaches, including feature-level and model-level fusion strategies combined with context-dependent HMM modeling. To achieve CS recognition, we propose three tandem CNNs-HMM architectures with different fusion types. All these architectures are evaluated on a corpus of continuously coded CS sentences without any artifice, and the CS recognition performance confirms the efficiency of our proposed methods. The result is comparable with the state of the art, which used corpora where the relevant information was marked beforehand. In parallel, we carried out a specific study of the temporal organization of hand movements, revealing that the hand moves in advance relative to its position in the sentence. In summary, this PhD thesis applies advanced machine learning methods from computer vision and deep learning methodologies to the CS recognition task, which constitutes an important step toward the general problem of automatic conversion of CS to audible speech. Future work will mainly focus on an end-to-end CNN-RNN system that incorporates a language model, and an attention mechanism for the multi-modal fusion.
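A minimal sketch of feature-level fusion for the three streams, with a fixed frame advance applied to the hand streams to compensate for the hand preceding the lips; the 3-frame shift and the feature dimensions are illustrative assumptions, not values from the thesis:

```python
import numpy as np

def fuse_cs_features(lips, hand_shape, hand_pos, hand_advance=3):
    """Concatenate per-frame features after delaying the hand streams by
    `hand_advance` frames so they align with the later lip gestures.
    All inputs are (T, d_i) arrays sampled at a shared frame rate."""
    T = lips.shape[0]
    pad = np.zeros((hand_advance, hand_shape.shape[1]))
    hs = np.vstack([pad, hand_shape])[:T]
    pad = np.zeros((hand_advance, hand_pos.shape[1]))
    hp = np.vstack([pad, hand_pos])[:T]
    return np.concatenate([lips, hs, hp], axis=1)   # (T, d_lips + d_hs + d_hp)

# illustrative streams: 100 frames of 64-d lips, 32-d shape, 2-d position
fused = fuse_cs_features(np.zeros((100, 64)), np.zeros((100, 32)), np.zeros((100, 2)))
print(fused.shape)   # (100, 98)
```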
150
Development of Next Generation Image Reconstruction Algorithms for Diffuse Optical and Photoacoustic Tomography Jaya Prakash, * January 2014 (has links) (PDF)
Biomedical optical imaging is capable of providing functional information about soft biological tissues, with applications that include imaging large tissues such as the breast and brain in vivo. Biomedical optical imaging uses near-infrared light (600 nm-900 nm) as the probing medium, giving it the added advantage of being a non-ionizing imaging modality. The tomographic technologies for imaging large tissues encompass diffuse optical tomography and photoacoustic tomography.
Traditional image reconstruction methods in diffuse optical tomography employ an ℓ2-norm based regularization, which is known to remove high-frequency noise in the reconstructed images and make them appear smooth. Hence, sparsity-based image reconstruction has been deployed for diffuse optical tomography; these sparse recovery methods utilize an ℓp-norm based regularization in the estimation problem, with 0 ≤ p < 1. These sparse recovery methods, along with an approximation to utilize the ℓ0-norm, have been used for the reconstruction of diffuse optical tomographic images. The comparison of these methods was performed by increasing the sparsity in the solution.
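A minimal sketch of one standard way to implement such ℓp-norm (0 < p < 1) regularized recovery, via iteratively reweighted least squares; the Jacobian J, the measurement misfit y, and all parameter values are illustrative assumptions, not the thesis's algorithm:

```python
import numpy as np

def irls_lp(J, y, p=0.5, lam=1e-2, n_iter=20, eps=1e-8):
    """Approximately minimize ||J x - y||^2 + lam * sum |x_i|^p by
    iteratively reweighted least squares with smoothing term eps."""
    x = np.linalg.lstsq(J, y, rcond=None)[0]        # l2 warm start
    for _ in range(n_iter):
        w = (x ** 2 + eps) ** (p / 2.0 - 1.0)       # IRLS weights
        x = np.linalg.solve(J.T @ J + lam * np.diag(w), J.T @ y)
    return x

# illustrative under-determined problem with a sparse ground truth
rng = np.random.default_rng(0)
J = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [1.0, -2.0, 0.5]
x_hat = irls_lp(J, J @ x_true)
```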
Further, a model resolution matrix based framework was proposed and shown to induce blur in the ℓ2-norm based regularization framework for diffuse optical tomography. This model-resolution matrix framework was utilized in the optical image deconvolution framework. A basis pursuit deconvolution based on the Split Augmented Lagrangian Shrinkage Algorithm (SALSA) was used along with the Tikhonov regularization step, making the image reconstruction a two-step procedure. This new two-step approach was found to be robust to noise and able to better delineate the structures, which was evaluated using numerical and gelatin phantom experiments.
Modern diffuse optical imaging systems are multi-modal in nature, where diffuse optical imaging is combined with traditional imaging modalities such as Magnetic Resonance Imaging (MRI) or Computed Tomography (CT). Image-guided diffuse optical tomography has the advantage of reducing the total number of optical parameters being reconstructed to the number of distinct tissue types identified by the traditional imaging modality, converting the optical image-reconstruction problem from under-determined in nature to over-determined. In such cases, the minimum required measurements might be far fewer compared to those of traditional diffuse optical imaging. An approach to choose these measurements optimally based on a data-resolution matrix is proposed, and it is shown that it drastically reduces the minimum required measurements (in a typical case, from 240 to 6) without compromising the image reconstruction performance.
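A minimal sketch of measurement selection driven by a data-resolution matrix: the measurements with the largest diagonal entries of D = J(JᵀJ + λI)⁻¹Jᵀ are the ones best resolved by the model, and the rest can be dropped. The ranking criterion, regularization value, and the keep-count of 6 (echoing the 240-to-6 reduction) are illustrative assumptions, not the thesis's exact procedure:

```python
import numpy as np

def select_measurements(J, lam=1e-2, n_keep=6):
    """Rank measurements by the diagonal of the data-resolution matrix
    D = J (J^T J + lam I)^{-1} J^T and keep the n_keep best-resolved."""
    JtJ = J.T @ J + lam * np.eye(J.shape[1])
    D = J @ np.linalg.solve(JtJ, J.T)
    return np.argsort(np.diag(D))[::-1][:n_keep]

rng = np.random.default_rng(0)
J = rng.standard_normal((240, 4))    # 240 measurements, 4 tissue types
print(select_measurements(J))        # indices of the 6 retained measurements
```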
In the last part of the work, model-based image reconstruction approaches in photoacoustic tomography (which combines light and ultrasound) are studied, as it is known that these methods have a distinct advantage compared to traditional analytical methods in the limited-data case. These model-based methods deploy a Tikhonov-based regularization scheme to reconstruct the initial pressure from the boundary acoustic data. Again, a model-resolution matrix for these cases tends to represent the blur induced by the regularization scheme. A method that utilizes this blurring model and performs the basis pursuit deconvolution to improve the quantitative accuracy of the reconstructed photoacoustic image is proposed and shown to be superior compared to other traditional methods. Moreover, this deconvolution, including the building of the model-resolution matrix, is achieved via Lanczos bidiagonalization (least-squares QR), making this approach computationally efficient and deployable in real-time.
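A minimal sketch of the matrix-free Tikhonov solve via least-squares QR (Lanczos bidiagonalization), the property credited above for real-time feasibility; SciPy's lsqr exposes the Tikhonov weight as `damp`, and the operator and data here are illustrative stand-ins:

```python
import numpy as np
from scipy.sparse.linalg import lsqr

def tikhonov_lsqr(A, b, lam=1e-2):
    """Solve min ||A x - b||^2 + lam^2 ||x||^2 without forming A^T A."""
    return lsqr(A, b, damp=lam)[0]

rng = np.random.default_rng(0)
A = rng.standard_normal((500, 200))   # stand-in for the acoustic model matrix
p0 = tikhonov_lsqr(A, A @ rng.standard_normal(200))
```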
Keywords: medical imaging, biomedical optical imaging, diffuse optical tomography, photoacoustic tomography, multi-modal imaging, inverse problems, sparse recovery, computational methods in biomedical optical imaging.