221

Etude de l’influence de l’entrée artérielle tumorale par modélisation numérique et in vitro en imagerie de contraste ultrasonore : application clinique pour l’évaluation des thérapies ciblées en cancérologie / In vitro assessment of the arterial input function influence on dynamic contrast-enhanced ultrasonography microvascularization parameter measurements using numerical modeling: clinical impact on treatment evaluations in oncology

Gauthier, Marianne 05 December 2011 (has links)
L’échographie dynamique de contraste (DCE-US) est actuellement proposée comme technique d’imagerie fonctionnelle permettant d’évaluer les nouvelles thérapies anti-angiogéniques. Dans ce contexte, L'UPRES EA 4040, Université Paris-Sud 11, et le service d'Echographie de l'Institut Gustave Roussy ont développé une méthodologie permettant de calculer automatiquement, à partir de la courbe de prise de contraste moyenne obtenue dans la tumeur après injection en bolus d’un agent de contraste, un ensemble de paramètres semi-quantitatifs. Actuellement, l’état hémodynamique du patient ou encore les conditions d’injection du produit de contraste ne sont pas pris en compte dans le calcul de ces paramètres à l’inverse d’autres modalités (imagerie par résonance magnétique dynamique de contraste ou scanner de perfusion). L’objectif de cette thèse était donc d’étendre la méthode de déconvolution utilisée en routine dans les autres modalités d’imagerie à l’échographie de contraste. Celle-ci permet de s’affranchir des conditions citées précédemment en déconvoluant la courbe de prise de contraste issue de la tumeur par la fonction d’entrée artérielle, donnant ainsi accès aux paramètres quantitatifs flux sanguin, volume sanguin et temps de transit moyen. Mon travail de recherche s’est alors articulé autour de trois axes. Le premier visait à développer la méthode de quantification par déconvolution dédiée à l’échographie de contraste, avec l’élaboration d’un outil méthodologique suivie de l’évaluation de son apport sur la variabilité des paramètres de la microvascularisation. Des évaluations comparatives de variabilité intra-opérateur ont alors mis en évidence une diminution drastique des coefficients de variation des paramètres de la microvascularisation de 30% à 13% avec la méthode de déconvolution. Le deuxième axe était centré sur l’étude des sources de variabilité influençant les paramètres de la microvascularisation portant à la fois sur les conditions expérimentales et sur les conditions physiologiques de la tumeur. Enfin, le dernier axe a reposé sur une étude rétrospective menée sur 12 patients pour lesquels nous avons évalué l’intérêt de la déconvolution en comparant l’évolution des paramètres quantitatifs et semi-quantitatifs de la microvascularisation en fonction des réponses des tumeurs obtenues par les critères RECIST à partir d’un scan effectué à 2 mois. Cette méthodologie est prometteuse et peut permettre à terme une évaluation plus robuste et précoce des thérapies anti-angiogéniques que les méthodologies actuellement utilisées en routine dans le cadre des examens DCE-US. / Dynamic contrast-enhanced ultrasonography (DCE-US) is currently used as a functional imaging technique for evaluating anti-angiogenic therapies. A mathematical model has been developed by the UPRES EA 4040, Paris-Sud University, and the Gustave Roussy Institute to evaluate semi-quantitative microvascularization parameters directly from time-intensity curves. However, DCE-US evaluation of these parameters does not yet take into account the patient's hemodynamic state or the way the contrast agent is injected, unlike other functional modalities (dynamic contrast-enhanced magnetic resonance imaging or perfusion CT). The aim of my PhD was to develop a deconvolution process dedicated to DCE-US imaging, a method already used routinely in other imaging modalities. Such a process would give access to quantitatively defined microvascularization parameters, since it provides absolute estimates of the tumor blood flow, the tumor blood volume and the mean transit time. This PhD work was organized around three main goals. First, we developed a deconvolution method, involving the creation of a quantification tool and its validation through studies of microvascularization parameter variability. Evaluation and comparison of intra-operator variabilities demonstrated a decrease in the coefficients of variation from 30% to 13% when microvascularization parameters were extracted using the deconvolution process. Secondly, we evaluated the sources of variation that influence microvascularization parameters, concerning both the experimental conditions and the physiological conditions of the tumor. Finally, we performed a retrospective study involving 12 patients for whom we evaluated the benefit of the deconvolution process: we compared the evolution of the quantitative and semi-quantitative microvascularization parameters against tumor responses assessed by the RECIST criteria on a scan performed at 2 months. Deconvolution is a promising process that may allow an earlier, more robust evaluation of anti-angiogenic treatments than the DCE-US methodology in current clinical use.
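
As a rough illustration of the deconvolution step described in this abstract (not the thesis's actual implementation): the tumor time-intensity curve is modeled as the arterial input function convolved with a tissue impulse response, and blood flow, blood volume and mean transit time follow from that response. A minimal Python sketch, assuming uniformly sampled curves, simple Tikhonov regularization, and illustrative function names and regularization weight:

    # Illustrative sketch only: regularized deconvolution of a tumor time-intensity
    # curve by an arterial input function (AIF); not the thesis's implementation.
    import numpy as np

    def deconvolve_tic(tumor_curve, aif, dt, reg=0.1):
        """Estimate the impulse response h such that tumor_curve ~= dt * conv(aif, h)."""
        n = len(aif)
        # Lower-triangular Toeplitz matrix implementing discrete convolution with the AIF.
        A = dt * np.array([[aif[i - j] if i >= j else 0.0 for j in range(n)]
                           for i in range(n)])
        # Tikhonov-regularized least squares: (A^T A + mu I) h = A^T c
        mu = reg * np.trace(A.T @ A) / n
        h = np.linalg.solve(A.T @ A + mu * np.eye(n), A.T @ np.asarray(tumor_curve))
        blood_flow = h.max()                  # peak of the impulse response
        blood_volume = h.sum() * dt           # area under the impulse response
        mean_transit_time = blood_volume / blood_flow
        return h, blood_flow, blood_volume, mean_transit_time

The regularization weight trades noise amplification against temporal resolution; in practice it would be tuned on simulated or phantom data.
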
222

Simulation aux Grandes Echelles et chimie complexe pour la modélisation de la structure chimique des flammes turbulentes / Large Eddy Simulations and complex chemistry for modeling the chemical structure of turbulent flames

Mehl, Cédric 12 June 2018 (has links)
La Simulation aux Grandes Echelles (SGE) est appliquée à des brûleurs industriels pour prédire de nombreux phénomènes physiques complexes, tel que l’allumage ou la formation de polluants. La prise en compte de réactions chimiques détaillées est alors indispensable pour obtenir des résultats précis. L’amélioration des moyens de calculs permet de réaliser des simulations de brûleurs avec une chimie de plus en plus détaillée. La principale problématique est le couplage entre les réactions chimiques et l’écoulement turbulent. Bien que la dynamique de flamme soit souvent bien reproduite avec les modèles actuels, la prédiction de phénomènes complexes comme la formation de polluants reste une tâche difficile. En particulier, des études ont montré que l’influence du plissement de sous-maille sur la structure chimique des flammes n’était pas prise en compte de manière précise. Deux modèles basés sur le filtrage explicite des fronts de flammes sont étudiés dans cette thèse afin d’améliorer la prédiction de polluants en combustion turbulente prémélangée : (i) le premier modèle met en jeu une méthode de déconvolution des variables filtrées ; (ii) le second modèle implique l’optimisation de la chimie pour obtenir des flammes turbulentes filtrées. L’objectif de la thèse est d’obtenir une prédiction précise des polluants à coût de calcul réduit. / Large Eddy Simulation (LES) is applied to industrial burners to predict a wide range of complex physical phenomena, such as flame ignition and pollutant formation. The prediction accuracy is tightly linked to the ability to describe in detail the chemical reactions and thus the flame chemical structure. With the improvement of computational clusters, the simulation of industrial burners with detailed chemistry becomes possible. A major issue is then to couple detailed chemical mechanisms to turbulent flows. While the flame dynamics is often correctly simulated with state-of-the-art models, the prediction of complex phenomena such as pollutant formation remains a difficult task. Several investigations show that, in many models, the impact of flame subgrid-scale wrinkling on the chemical flame structure is not accurately taken into account. Two models based on explicit flame front filtering are explored in this thesis to improve the prediction of pollutant formation in turbulent premixed combustion: (i) a model based on deconvolution of filtered scalars; (ii) a model involving the optimization of chemistry to reproduce filtered turbulent flames. The objective of the work is to achieve high accuracy in pollutant formation prediction at low computational cost.
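
The first of the two models above relies on deconvolution of the filtered variables. A generic way to sketch such a step (not the thesis's specific model) is a truncated Van Cittert approximate deconvolution, shown here in Python on a 1-D field with a Gaussian test filter standing in for the LES filter; the filter width and number of terms are illustrative:

    # Generic approximate deconvolution (truncated Van Cittert series), not the
    # thesis's model: phi* = sum over k of (I - G)^k applied to the filtered field.
    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    def approximate_deconvolution(phi_filtered, sigma=2.0, n_terms=5):
        G = lambda f: gaussian_filter1d(f, sigma, mode="wrap")   # stand-in LES filter
        residual = np.asarray(phi_filtered, dtype=float).copy()
        phi_star = np.zeros_like(residual)
        for _ in range(n_terms):
            phi_star = phi_star + residual    # accumulate (I - G)^k phi_filtered
            residual = residual - G(residual)
        return phi_star
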
223

Diagnostic de Faisceau par "Laser-Wire" et Mesure Rapide du Spectre de Luminosite au Collisionneur Lineaire International / Laser-Wire Beam Diagnostics and Fast Measurement of the Luminosity Spectrum at the International Linear Collider

Poirier, Freddy 06 September 2005 (has links) (PDF)
The International Linear Collider (ILC), a high precision electron-positron machine with centre-of-mass energy extending up to the TeV scale, is currently being proposed and designed. With unprecedentedly small beam size and high intensity, the ILC aims at luminosities of the order of 10³⁴ cm⁻² s⁻¹. Careful monitoring of the beam parameters that affect the luminosity will be mandatory if these ambitious goals are to be achieved. One of the key parameters is beam emittance, the optimisation of which requires beam size monitors with micron resolution. With this aim, a non-invasive laser-wire monitor prototype was designed, installed and run at the PETRA ring. Prior to its installation, background simulations and measurements were performed to verify that backgrounds would be low enough to allow the laser-wire programme to proceed. A lead-tungstate crystal calorimeter for the laser-wire was commissioned, including a study of temperature dependence, geometrical acceptance and energy response. The first laser-wire measurements of the PETRA positron beam size were then performed. The system, calibration and results are reported here. At the ILC, beam energy spread and beamstrahlung effects modify the luminosity spectrum. Determination of these effects is crucial in order to extract precision physics from threshold scans. In order to provide a run-time diagnostic scheme to address this, a fast luminosity spectrum measurement technique employing forward calorimetry and statistical unfolding was devised, using the Bhabha process at low angles. The scheme is described and first results are presented.
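
For the beam-size measurement itself, a laser-wire scan records the Compton signal as the laser is stepped across the beam; for near-Gaussian beam and laser profiles the measured width is the quadrature sum of the two, so the beam size follows by removing the known laser spot size. A small Python sketch (function and variable names are illustrative, not taken from the thesis):

    # Illustrative laser-wire analysis: fit the scan with a Gaussian, then remove
    # the known laser spot size in quadrature. Names and defaults are assumptions.
    import numpy as np
    from scipy.optimize import curve_fit

    def gaussian(x, amplitude, mean, sigma, offset):
        return amplitude * np.exp(-0.5 * ((x - mean) / sigma) ** 2) + offset

    def beam_size_from_scan(positions_um, compton_rate, laser_sigma_um):
        p0 = [compton_rate.max() - compton_rate.min(),
              positions_um[np.argmax(compton_rate)],
              (positions_um.max() - positions_um.min()) / 10.0,
              compton_rate.min()]
        popt, _ = curve_fit(gaussian, positions_um, compton_rate, p0=p0)
        sigma_measured = abs(popt[2])
        # Gaussian convolution: sigma_measured^2 = sigma_beam^2 + sigma_laser^2
        return np.sqrt(max(sigma_measured ** 2 - laser_sigma_um ** 2, 0.0))
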
224

3D imaging using time-correlated single photon counting

Neimert-Andersson, Thomas January 2010 (has links)
This project investigates a laser radar system. The system is based on the principles of time-correlated single photon counting, and by measuring the times-of-flight of reflected photons it can find range profiles and perform three-dimensional imaging of scenes. Because of the photon counting technique the resolution and precision that the system can achieve is very high compared to analog systems. These properties make the system interesting for many military applications. For example, the system can be used to interrogate non-cooperative targets at a safe distance in order to gather intelligence. However, signal processing is needed in order to extract the information from the data acquired by the system. This project focuses on the analysis of different signal processing methods. The Wiener filter and the Richardson-Lucy algorithm are used to deconvolve the data acquired by the photon counting system. In order to find the positions of potential targets different approaches of non-linear least squares methods are tested, as well as a more unconventional method called ESPRIT. The methods are evaluated based on their ability to resolve two targets separated by some known distance and the accuracy with which they calculate the position of a single target, as well as their robustness to noise and their computational burden. Results show that fitting a curve made of a linear combination of asymmetric super-Gaussians to the data by a method of non-linear least squares manages to accurately resolve targets separated by 1.75 cm, which is the best result of all the methods tested. The accuracy for finding the position of a single target is similar between the methods but ESPRIT has a much faster computation time.
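
Of the two deconvolution methods mentioned, Richardson-Lucy is straightforward to sketch on a time-of-flight histogram; the version below is a generic numpy implementation assuming a separately measured instrument response function, with an illustrative iteration count:

    # Generic Richardson-Lucy deconvolution of a TCSPC time-of-flight histogram;
    # the instrument response function (IRF) is assumed to be measured separately.
    import numpy as np

    def richardson_lucy(histogram, irf, n_iter=50, eps=1e-12):
        histogram = np.asarray(histogram, dtype=float)
        irf = np.asarray(irf, dtype=float)
        irf = irf / irf.sum()                       # normalize the IRF
        irf_flipped = irf[::-1]
        estimate = np.full_like(histogram, histogram.mean())
        for _ in range(n_iter):
            blurred = np.convolve(estimate, irf, mode="same")
            ratio = histogram / (blurred + eps)     # Poisson-style correction ratio
            estimate = estimate * np.convolve(ratio, irf_flipped, mode="same")
        return estimate
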
225

A Signal Processing Approach to Practical Neurophysiology : A Search for Improved Methods in Clinical Routine and Research

Hammarberg, Björn January 2002 (has links)
Signal processing within the neurophysiological field is challenging and requires short processing time and reliable results. In this thesis, three main problems are considered. First, a modified line source model for simulation of muscle action potentials (APs) is presented. It is formulated in continuous-time as a convolution of a muscle-fiber dependent transmembrane current and an electrode dependent weighting (impedance) function. In the discretization of the model, the Nyquist criterion is addressed. By applying anti-aliasing filtering, it is possible to decrease the discretization frequency while retaining the accuracy. Finite length muscle fibers are incorporated in the model through a simple transformation of the weighting function. The presented model is suitable for modeling large motor units. Second, the possibility of discerning the individual AP components of the concentric needle electromyogram (EMG) is explored. Simulated motor unit APs (MUAPs) are prefiltered using Wiener filtering. The mean fiber concentration (MFC) and jitter are estimated from the prefiltered MUAPs. The results indicate that the assessment of the MFC may well benefit from the presented approach and that the jitter may be estimated from the concentric needle EMG with an accuracy comparable with traditional single fiber EMG. Third, automatic, rather than manual, detection and discrimination of recorded C-fiber APs is addressed. The algorithm detects the APs reliably using a matched filter. Then, the detected APs are discriminated using multiple hypothesis tracking combined with Kalman filtering, which identifies the APs originating from the same C-fiber. To improve the performance, an amplitude estimate is incorporated into the tracking algorithm. Several years of use show that the performance of the algorithm is excellent with minimal need for audit.
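
The detection stage described in the third part amounts to correlating the recording with an AP template and thresholding the output. A minimal Python sketch, assuming the template is known and using an illustrative noise-based threshold:

    # Illustrative matched-filter detection of action potentials; the template and
    # threshold factor are assumptions, not the thesis's tuned values.
    import numpy as np

    def matched_filter_detect(recording, template, threshold_factor=4.0):
        kernel = (template - template.mean())[::-1]            # time-reversed template
        output = np.convolve(recording - np.mean(recording), kernel, mode="same")
        noise_sigma = np.median(np.abs(output)) / 0.6745       # robust noise estimate
        above = output > threshold_factor * noise_sigma
        peaks = [i for i in range(1, len(output) - 1)
                 if above[i] and output[i] >= output[i - 1] and output[i] > output[i + 1]]
        return np.array(peaks)                                 # candidate AP locations
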
228

New Techniques for Estimation of Source Parameters : Applications to Airborne Gravity and Pseudo-Gravity Gradient Tensors

Beiki, Majid January 2011 (has links)
Gravity gradient tensor (GGT) data contains the second derivatives of the Earth’s gravitational potential in three orthogonal directions. GGT data can be measured using land, airborne, marine or space platforms. In the last two decades, the applications of GGT data in hydrocarbon exploration, mineral exploration and structural geology have increased considerably. This work focuses on developing new interpretation techniques for GGT data as well as for the pseudo-gravity gradient tensor (PGGT) derived from measured magnetic field data. The applications of the developed methods are demonstrated on a GGT data set from the Vredefort impact structure, South Africa, and a magnetic data set from the Särna area, west central Sweden. The eigenvectors of the symmetric GGT can be used to estimate the position of the causative body as well as its strike direction. For a given measurement point, the eigenvector corresponding to the maximum eigenvalue points approximately toward the center of mass of the source body. For quasi-2D structures, the strike direction of the source can be estimated from the direction of the eigenvectors corresponding to the smallest eigenvalues. The same properties of the GGT are valid for the pseudo-gravity gradient tensor (PGGT) derived from magnetic field data, assuming that the magnetization direction is known. The analytic signal concept is applied to GGT data in three dimensions. Three analytic signal functions are introduced along the x-, y- and z-directions, called directional analytic signals. The directional analytic signals are homogeneous and satisfy Euler’s homogeneity equation. Euler deconvolution of directional analytic signals can be used to locate causative bodies. The structural index of the gravity field is automatically identified by solving three Euler equations derived from the GGT for a set of data points located within a square window with adjustable size. For 2D causative bodies striking in the y-direction, the measured gxz and gzz components of the GGT can be jointly inverted to estimate the parameters of infinite dike and geological contact models. Once the strike direction of the 2D causative body is estimated, the measured components can be transformed into the strike coordinate system. The GGT data within a set of square windows for both infinite dike and geological contact models are deconvolved and the best model is chosen based on the smallest data-fit error. / Erroneously printed as Digital Comprehensive Summaries of Uppsala Dissertations from the Faculty of Science and Technology 730.
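
The eigenvector properties stated above translate directly into a few lines of numpy: for a symmetric GGT, the eigenvector of the largest eigenvalue points roughly toward the source's centre of mass, and for quasi-2D bodies the smallest-eigenvalue eigenvector indicates strike. A sketch with a placeholder tensor (the numerical values are not real data):

    # Illustrative eigenanalysis of a gravity gradient tensor at one station;
    # the tensor values below are placeholders, not measured data.
    import numpy as np

    def ggt_directions(ggt):
        """Return source direction (max eigenvalue) and strike direction (min eigenvalue)."""
        eigvals, eigvecs = np.linalg.eigh(ggt)   # ascending eigenvalues, symmetric input
        return eigvecs[:, -1], eigvecs[:, 0]

    # Symmetric, trace-free placeholder tensor (units of Eotvos)
    ggt = np.array([[ 20.0,  5.0,  12.0],
                    [  5.0, -8.0,   7.0],
                    [ 12.0,  7.0, -12.0]])
    source_direction, strike_direction = ggt_directions(ggt)
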
229

Improvement of signal analysis for the ultrasonic microscopy / Verbesserung der Signalauswertung für die Ultraschallmikroskopie

Gust, Norbert 30 June 2011 (has links) (PDF)
This dissertation describes the improvement of signal analysis in ultrasonic microscopy for nondestructive testing. Specimens with many thin layers, like modern electronic components, pose a particular challenge for identifying and localizing defects. In this thesis, new evaluation algorithms have been developed which enable analysis of highly complex layer stacks. This is achieved by a specific evaluation of multiple reflections, a newly developed iterative reconstruction and deconvolution algorithm, and the use of classification algorithms with a highly optimized simulation algorithm. Deep delaminations inside a 19-layer component can now not only be detected, but also localized. The new analysis methods also enable precise determination of elastic material parameters, sound velocities, thicknesses, and densities of multiple layers. The greatly improved precision of the reflection parameters determined via deconvolution also yields better and more conclusive results with common analysis methods. / Die vorgelegte Dissertation befasst sich mit der Verbesserung der Signalauswertung für die Ultraschallmikroskopie in der zerstörungsfreien Prüfung. Insbesondere bei Proben mit vielen dünnen Schichten, wie bei modernen Halbleiterbauelementen, ist das Auffinden und die Bestimmung der Lage von Fehlstellen eine große Herausforderung. In dieser Arbeit wurden neue Auswertealgorithmen entwickelt, die eine Analyse hochkomplexer Schichtabfolgen ermöglichen. Erreicht wird dies durch die gezielte Auswertung von Mehrfachreflexionen, einen neu entwickelten iterativen Rekonstruktions- und Entfaltungsalgorithmus und die Nutzung von Klassifikationsalgorithmen im Zusammenspiel mit einem hoch optimierten neu entwickelten Simulationsalgorithmus. Dadurch ist es erstmals möglich, tief liegende Delaminationen in einem 19-schichtigen Halbleiterbauelement nicht nur zu detektieren, sondern auch zu lokalisieren. Die neuen Analysemethoden ermöglichen des Weiteren eine genaue Bestimmung von elastischen Materialparametern, Schallgeschwindigkeiten, Dicken und Dichten mehrschichtiger Proben. Durch die stark verbesserte Genauigkeit der Reflexionsparameterbestimmung mittels Signalentfaltung lassen sich auch mit klassischen Analysemethoden deutlich bessere und aussagekräftigere Ergebnisse erzielen. Aus den Erkenntnissen dieser Dissertation wurde ein Ultraschall-Analyseprogramm entwickelt, das diese komplexen Funktionen auf einer gut bedienbaren Oberfläche bereitstellt und bereits praktisch genutzt wird.
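
The thesis develops its own iterative reconstruction and deconvolution algorithm; as a generic point of comparison, a frequency-domain Wiener deconvolution of an A-scan against a measured reference pulse (a common baseline for sharpening reflection parameters) can be sketched as follows, with an illustrative noise-to-signal constant:

    # Generic Wiener deconvolution of an ultrasonic A-scan by a reference pulse;
    # this is a baseline sketch, not the iterative algorithm developed in the thesis.
    import numpy as np

    def wiener_deconvolve(a_scan, reference_pulse, noise_to_signal=1e-2):
        n = len(a_scan)
        H = np.fft.rfft(reference_pulse, n)      # reference pulse spectrum
        Y = np.fft.rfft(a_scan, n)               # measured A-scan spectrum
        # Wiener filter: conj(H) / (|H|^2 + NSR * max|H|^2) regularizes weak bins
        W = np.conj(H) / (np.abs(H) ** 2 + noise_to_signal * np.max(np.abs(H)) ** 2)
        return np.fft.irfft(W * Y, n)            # sharpened reflection sequence
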
230

Reconstruction of enhanced ultrasound images from compressed measurements / Reconstruction d'images ultrasonores déconvoluées à partir de données compressées

Chen, Zhouye 21 October 2016 (has links)
L'intérêt de l'échantillonnage compressé dans l'imagerie ultrasonore a été récemment évalué largement par plusieurs équipes de recherche. Suite aux différentes configurations d'application, il a été démontré que les données RF peuvent être reconstituées à partir d'un faible nombre de mesures et / ou en utilisant un nombre réduit d'émission d'impulsions ultrasonores. Selon le modèle de l'échantillonnage compressé, la résolution des images ultrasonores reconstruites à partir des mesures compressées dépend principalement de trois aspects: la configuration d'acquisition, c.à.d. l'incohérence de la matrice d'échantillonnage, la régularisation de l'image, c.à.d. l'a priori de parcimonie et la technique d'optimisation. Nous nous sommes concentrés principalement sur les deux derniers aspects dans cette thèse. Néanmoins, la résolution spatiale d'image RF, le contraste et le rapport signal sur bruit dépendent de la bande passante limitée du transducteur d'imagerie et du phénomène physique lié à la propagation des ondes ultrasonores. Pour surmonter ces limitations, plusieurs techniques de traitement d'image en fonction de déconvolution ont été proposées pour améliorer les images ultrasonores. / The interest of compressive sampling in ultrasound imaging has recently been extensively evaluated by several research teams. Following the different application setups, it has been shown that the RF data may be reconstructed from a small number of measurements and/or using a reduced number of ultrasound pulse emissions. According to the model of compressive sampling, the resolution of ultrasound images reconstructed from compressed measurements mainly depends on three aspects: the acquisition setup, i.e. the incoherence of the sampling matrix, the image regularization, i.e. the sparsity prior, and the optimization technique. We mainly focused on the last two aspects in this thesis. Nevertheless, RF image spatial resolution, contrast and signal-to-noise ratio are affected by the limited bandwidth of the imaging transducer and the physical phenomena related to ultrasound wave propagation. To overcome these limitations, several deconvolution-based image processing techniques have been proposed to enhance the ultrasound images.
In this thesis, we first propose a novel framework for ultrasound imaging, named compressive deconvolution, to combine compressive sampling and deconvolution. Exploiting a unified formulation of the direct acquisition model, combining random projections and 2D convolution with a spatially invariant point spread function, the benefit of this framework is the joint data volume reduction and image quality improvement. An optimization method based on the Alternating Direction Method of Multipliers is then proposed to invert the linear model, including two regularization terms expressing the sparsity of the RF images in a given basis and the generalized Gaussian statistical assumption on tissue reflectivity functions. It is further improved by a method based on the Simultaneous Direction Method of Multipliers. Both algorithms are evaluated on simulated and in vivo data. Building on these regularization techniques, a novel approach based on Alternating Minimization is finally developed to jointly estimate the tissue reflectivity function and the point spread function. A preliminary investigation is made on simulated data.
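
The forward model above combines random projections with a 2D convolution and is inverted with ADMM in the thesis. A much-simplified 1-D sketch of the same structure, solved here with plain proximal-gradient (ISTA) iterations and an L1 sparsity prior rather than the ADMM/SDMM schemes described, with illustrative parameters:

    # Simplified 1-D compressive-deconvolution sketch: y = M (h * x) + noise,
    # solved with ISTA (proximal gradient), not the ADMM/SDMM methods of the thesis.
    import numpy as np

    def ista_compressive_deconvolution(y, M, h, lam=0.05, n_iter=200, seed=0):
        n = M.shape[1]
        conv = lambda x: np.convolve(x, h, mode="same")
        conv_adj = lambda r: np.convolve(r, h[::-1], mode="same")  # approximate adjoint
        A = lambda x: M @ conv(x)
        At = lambda r: conv_adj(M.T @ r)
        # Rough power-iteration estimate of the Lipschitz constant of A^T A
        v = np.random.default_rng(seed).standard_normal(n)
        for _ in range(20):
            v = At(A(v))
            v = v / np.linalg.norm(v)
        L = np.linalg.norm(At(A(v)))
        x = np.zeros(n)
        for _ in range(n_iter):
            x = x - At(A(x) - y) / L                               # gradient step
            x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)  # soft thresholding
        return x
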
