211

Škálování arteriální vstupní funkce v DCE-MRI / Scaling of arterial input function in DCE-MRI

Holeček, Tomáš Unknown Date (has links)
Perfusion magnetic resonance imaging is a modern diagnostic method used mainly in oncology. A contrast agent is injected into the subject, and the evolution of its concentration in the affected area is then monitored continuously over time. Correct determination of the arterial input function (AIF) is very important for perfusion analysis. One possibility is to model the AIF by multichannel blind deconvolution, but the estimated AIF then has to be scaled. This master's thesis focuses on the description of scaling methods and their influence on perfusion parameters, depending on the AIF model used, in different tissues.
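As an illustration of the scaling step mentioned above, here is a minimal sketch (not the thesis's own method) of one common convention: rescaling a blindly estimated AIF so that its area under the curve matches a reference value. The gamma-variate shape and the reference value are illustrative assumptions.

```python
import numpy as np

def trapezoid(y, x):
    """Trapezoidal rule (kept explicit to avoid NumPy version issues)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def scale_aif(aif, t, reference_auc):
    """Rescale a blindly estimated AIF so that its area under the
    curve equals a chosen reference value (one common convention)."""
    return aif * (reference_auc / trapezoid(aif, t))

t = np.linspace(0.0, 60.0, 601)            # time axis [s], illustrative
aif = t * np.exp(-t / 8.0)                 # toy gamma-variate AIF shape
scaled = scale_aif(aif, t, reference_auc=100.0)
print(round(trapezoid(scaled, t), 6))      # -> 100.0
```

Any multiplicative scaling of the AIF rescales the derived perfusion parameters accordingly, which is why the choice of convention matters.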
212

Segmentation and Deconvolution of Fluorescence Microscopy Volumes

Soonam Lee (6738881) 14 August 2019 (has links)
Recent advances in optical microscopy have enabled biologists to collect fluorescence microscopy volumes of cellular and subcellular structures in living tissue. The resulting large volumetric datasets call for automated, image-processing-based quantification. The first and fundamental step in quantifying biological structures is segmentation. Yet quantitative analysis of microscopy volumes is hampered by light diffraction, distortion created by lens aberrations in different directions, and the complex variation of biological structures. This thesis describes several proposed segmentation methods for identifying biological structures such as nuclei and tubules observed in fluorescence microscopy volumes. For nuclei segmentation, a multiscale edge detection method and a 3D active contour method with inhomogeneity correction are used. The proposed 3D active contour method exploits the full 3D volume while correcting intensity inhomogeneity across the vertical and horizontal directions. For tubule segmentation, an ellipse-fitting method for tubule boundaries and a convolutional neural network with inhomogeneity correction are presented. More specifically, the ellipse-fitting method combines adaptive and global thresholding, potentials, z-direction refinement, branch pruning, end-point matching, and boundary fitting to delineate tubular objects, while the deep-learning-based method combines intensity inhomogeneity correction and data augmentation with a convolutional neural network architecture. Moreover, this thesis demonstrates a new deconvolution method that improves microscopy image quality without knowledge of the 3D point spread function, using spatially constrained cycle-consistent adversarial networks. The results of the proposed methods are compared visually and numerically with other methods. Experimental results demonstrate that the proposed methods achieve better performance than other methods for nuclei/tubule segmentation as well as deconvolution.
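As a toy illustration of the intensity-inhomogeneity correction mentioned above, the sketch below divides a 2D image by a heavily smoothed copy of itself to remove a multiplicative shading field. This is a generic stand-in, not the thesis's actual correction model; the sigma value and synthetic test pattern are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def correct_inhomogeneity(img, sigma=20.0):
    """Estimate a smooth multiplicative bias field by heavy Gaussian
    smoothing and divide it out (generic correction, illustrative only)."""
    bias = np.maximum(gaussian_filter(img, sigma), 1e-6)  # avoid /0
    corrected = img / bias
    return corrected / corrected.max()

# synthetic image: left-to-right shading times a fine texture
yy, xx = np.mgrid[0:128, 0:128]
shade = 0.5 + xx / 256.0
img = shade * (1.0 + 0.2 * np.sin(xx / 5.0))
out = correct_inhomogeneity(img)
print(bool(out.std() < img.std()))   # shading removed, contrast flattened
```

The same idea extends slice-wise to 3D volumes, with direction-dependent smoothing when the inhomogeneity differs between the axial and lateral directions.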
213

Système d'imagerie hybride par codage de pupille / Hybrid imaging system with wavefront coding

Diaz, Frédéric 06 May 2011 (has links)
New imaging concepts allow optical systems to be more compact and to perform better. Among these techniques, hybrid imaging systems with wavefront coding combine an optical system containing a phase mask with digital processing. The phase function implemented on the mask makes the image insensitive to a defect of the optical system, such as an aberration or defocus. This advantage comes at the price of a known deformation of the image, which is subsequently corrected by digital processing. The properties of these systems were studied with the aim of increasing the depth of field of an imaging system; a gain on this parameter already allows the relaxation of optical design constraints such as field curvature, thermal defocus, and chromatic aberration. In these imaging techniques, accounting for sensor noise is one of the critical parameters in choosing the image-processing method. The work carried out in this thesis led to an original approach for the joint design of the phase function of the mask and the image-restoration algorithm, based on a signal-to-noise-ratio criterion on the final image. Unlike known approaches, this criterion shows that a strict invariance of the optical transfer function is not required. The phase-function parameters optimized with this criterion differ noticeably from those usually proposed and lead to a significant improvement in image quality. This joint-design approach was validated experimentally on an uncooled thermal camera: a binary phase mask, combined with real-time processing implemented on a GPU, increased the depth of field of the camera by a factor of 3. Given the high noise level introduced by the bolometric sensor, the good quality of the images obtained after processing demonstrates the interest of the joint-design approach applied to hybrid imaging with wavefront coding.
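The digital restoration step in such hybrid systems is often a Wiener-type filter; a minimal generic sketch follows (this is the standard filter, not the thesis's jointly optimized design, and the toy box-blur OTF and noise-to-signal ratio are assumptions).

```python
import numpy as np

def wiener_restore(blurred, otf, nsr):
    """Wiener filter in the Fourier domain: conj(H) / (|H|^2 + NSR)."""
    W = np.conj(otf) / (np.abs(otf) ** 2 + nsr)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * W))

rng = np.random.default_rng(0)
img = rng.random((64, 64))
psf = np.zeros((64, 64)); psf[:3, :3] = 1.0 / 9.0   # toy 3x3 box blur
otf = np.fft.fft2(psf)
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * otf))
restored = wiener_restore(blurred, otf, nsr=1e-6)
# restoration brings the image closer to the original than the blur
print(bool(np.abs(restored - img).mean() < np.abs(blurred - img).mean()))
```

The NSR term plays exactly the role discussed in the abstract: it trades off deblurring strength against amplification of sensor noise.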
214

Approche bayésienne pour la localisation de sources en imagerie acoustique / Bayesian approach in acoustic source localization and imaging

Chu, Ning 22 November 2013 (has links)
Acoustic imaging is an advanced technique for localizing acoustic sources and reconstructing their power from limited measurements at a microphone array. It is widely used to evaluate acoustic influence in the automobile and aircraft industries. Acoustic imaging methods typically involve a forward model of acoustic propagation and the inversion of that model. The inversion, however, is usually a severely ill-posed inverse problem, whose solution is not unique and is very sensitive to measurement errors. Classical methods therefore achieve neither high spatial resolution between two close sources nor a wide dynamic range of acoustic source powers. In this thesis, we first build a discrete forward model of acoustic signal propagation, a linear but under-determined system of equations linking the measured data to the unknown source signals. From it we derive a discrete forward model of acoustic power propagation that is both linear and determined in the source powers. In these forward models, the measurement errors are decomposed into three parts: background noise at the sensor array, model uncertainty caused by multi-path propagation, and model-approximation errors. For the inverse problem of the power model, we first propose a robust super-resolution approach with a sparsity constraint, which yields very high spatial resolution even under strong measurement errors, provided the sparsity parameter is estimated carefully. Then, to obtain a wide dynamic range and stronger robustness to noise, we propose a Bayesian inference approach with a sparsity-enforcing prior, the double exponential law, which embodies the sparsity of the source distribution better than a hard sparsity constraint. All unknown variables and parameters can be estimated by Joint Maximum A Posteriori (JMAP) estimation. However, JMAP involves a non-quadratic optimization with a heavy computational cost, so we accelerate it by approximating the forward power model with a 2D convolution using a shift-invariant kernel. Thanks to this convolution model, our approaches can be parallelized on a Graphics Processing Unit (GPU). We further refine the statistical model in two respects. First, the measurement errors are allowed to be spatially variant (non-stationary) across sensors; in this more realistic case their distribution is modeled by a Student's t law, whose hidden parameters express the varying variances. Second, the sparsity-enforcing prior on the powers is also described by a Student's t law, which decomposes into multivariate Gaussian and Gamma laws. Since JMAP then has to handle many unknown variables and hidden parameters, we apply the Variational Bayesian Approximation (VBA) to overcome its drawbacks. A notable advantage of VBA is that it provides not only estimates of all unknowns but also confidence intervals, thanks to the hidden parameters of the Student's t priors. Finally, the proposed approaches are compared with state-of-the-art methods on simulated data, real data (from wind-tunnel experiments at Renault S2A), and hybrid data. Their main advantages are robustness to measurement errors, super-resolution, wide dynamic range, and no need for prior knowledge of the number of sources or of the Signal-to-Noise Ratio (SNR).
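The shift-invariant convolution approximation makes the forward power model a 2D convolution, which generic sparse solvers can invert efficiently. The sketch below uses nonnegative ISTA as an illustrative stand-in for the thesis's Bayesian inversion; the toy PSF and source map are assumptions.

```python
import numpy as np

def fft_conv(x, otf):
    """Circular 2D convolution via FFT (shift-invariant forward model)."""
    return np.real(np.fft.ifft2(np.fft.fft2(x) * otf))

def sparse_deconv(y, otf, lam=1e-3, n_iter=500):
    """Nonnegative ISTA for min ||conv(x) - y||^2 / 2 + lam * sum(x), x >= 0."""
    x = np.zeros_like(y)
    step = 1.0 / np.abs(otf).max() ** 2          # 1 / Lipschitz constant
    for _ in range(n_iter):
        grad = fft_conv(fft_conv(x, otf) - y, np.conj(otf))  # A^T (A x - y)
        x = np.maximum(x - step * (grad + lam), 0.0)         # prox step
    return x

true = np.zeros((32, 32)); true[8, 8] = 1.0; true[20, 25] = 0.5
psf = np.zeros((32, 32))
psf[:5, :5] = np.outer(np.hanning(5), np.hanning(5))   # toy beamformer PSF
otf = np.fft.fft2(psf / psf.sum())
y = fft_conv(true, otf)                                # blurred power map
est = sparse_deconv(y, otf)
print(bool(est.argmax() == true.argmax()))             # strongest source found
```

Because every iteration is FFTs and pointwise operations, this structure maps directly onto GPU parallelization, which is the computational point made in the abstract.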
215

Nonparametric adaptive estimation for discretely observed Lévy processes

Kappus, Julia Johanna 30 October 2012 (has links)
This thesis deals with nonparametric estimation methods for discretely observed Lévy processes. A Lévy process X with finite variation on compact sets and finite second moments is observed at low frequency; its jump dynamics is fully described by the finite signed measure μ(dx) := x ν(dx). The goal is to estimate linear functionals of μ nonparametrically. In the first part, kernel estimators are constructed and upper bounds on the corresponding risk are proved; from these, rates of convergence are derived under regularity assumptions on the Lévy measure. For particular cases, minimax lower bounds are proved, showing the rates to be minimax optimal. The focus lies on the data-driven choice of the smoothing parameter, which is considered in the second part. Since nonparametric estimation for Lévy processes has strong structural similarities with density deconvolution with unknown error density, both problems are discussed in parallel and the methods are developed in generality, for Lévy processes as well as for density deconvolution. The bandwidth is chosen using model selection via penalization. That principle usually relies on the fact that the fluctuation of certain stochastic quantities can be controlled by penalizing with a deterministic term; in the setting investigated here, however, the variance is unknown and the penalty term is itself stochastic. The main concern of this thesis is to develop strategies for dealing with this stochastic penalty term. The key step is a modified estimator of the unknown characteristic function in the denominator, which allows the pointwise control of the deviation of this object from its target to be extended uniformly over the whole real line. The main technical tools in the proofs are Talagrand-type concentration inequalities for empirical processes.
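The Fourier-based deconvolution estimator that underlies this line of work can be sketched as follows: a spectral-cutoff estimator with a known error characteristic function (the thesis's setting, with unknown error density and data-driven bandwidth, is considerably more involved). The distributions and bandwidth below are illustrative assumptions.

```python
import numpy as np

def deconv_density(obs, err_cf, grid, h):
    """Spectral-cutoff deconvolution density estimator: invert the ratio
    phi_emp(u) / phi_err(u) over the frequency band |u| <= 1/h."""
    u = np.linspace(-1.0 / h, 1.0 / h, 512)
    emp_cf = np.exp(1j * np.outer(u, obs)).mean(axis=1)   # empirical char. fn.
    ratio = emp_cf / err_cf(u)
    du = u[1] - u[0]
    # inverse Fourier transform on the evaluation grid (Riemann sum)
    vals = (np.exp(-1j * np.outer(grid, u)) * ratio).sum(axis=1) * du / (2 * np.pi)
    return np.real(vals)

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, 2000)                    # unobserved signal
y = x + rng.laplace(0.0, 0.3, 2000)               # observed with additive error
err_cf = lambda u: 1.0 / (1.0 + (0.3 * u) ** 2)   # Laplace(0, 0.3) char. fn.
grid = np.linspace(-4.0, 4.0, 81)
fhat = deconv_density(y, err_cf, grid, h=0.4)
true = np.exp(-grid ** 2 / 2.0) / np.sqrt(2.0 * np.pi)
print(bool(np.abs(fhat - true).max() < 0.15))     # close to the N(0,1) density
```

The bandwidth h controls the spectral cutoff 1/h; choosing it from the data is exactly the model-selection problem the abstract describes, made harder when err_cf must itself be estimated.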
216

Adaptive and efficient quantile estimation

Trabs, Mathias 07 July 2014 (has links)
The estimation of quantiles and related functionals is studied in two inverse problems: the classical deconvolution model and the Lévy model, in which a Lévy process is observed and functionals of its jump measure are to be estimated. From a more abstract perspective, semiparametric efficiency in the sense of Hájek-Le Cam is studied for functional estimation in regular indirect models. A general convolution theorem is proved, which applies to a large class of statistical inverse problems. In the deconvolution model, the plug-in estimators of the distribution function and of the quantiles are shown to be efficient. In the nonlinear Lévy model, based on low-frequency discrete observations of the Lévy process, an information bound for the estimation of functionals of the jump measure is derived, and the close relationship between the Lévy model and the deconvolution model is given a precise meaning. Quantile estimation in deconvolution problems is studied comprehensively, covering in particular the more realistic setup of unknown error distributions. Under minimal and natural conditions, the plug-in method is shown to be minimax optimal, and a data-driven bandwidth choice yields optimal adaptive estimation. The concept of quantiles is generalized to possibly infinite Lévy measures by considering left and right tail integrals. Based on equidistant discrete observations of the process, a nonparametric estimator of the generalized quantiles is constructed and minimax convergence rates are derived. As a motivating financial example of an inverse problem, the calibration of an exponential Lévy model for asset prices is studied empirically: the estimators of the generalized quantiles are adapted to this model, an optimal adaptive quantile estimator is constructed, and the procedure is applied to real data on DAX options.
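The plug-in quantile construction studied here can be sketched generically: integrate a density estimate to a distribution function and invert it at the desired level. The Gaussian "estimate" below is only a placeholder for an actual deconvolution estimator.

```python
import numpy as np

def plugin_quantile(grid, density, tau):
    """Plug-in quantile: integrate the density estimate to a CDF
    via a cumulative sum and invert it at level tau."""
    dx = grid[1] - grid[0]
    cdf = np.cumsum(density) * dx
    cdf = cdf / cdf[-1]                  # renormalise the estimate
    return float(np.interp(tau, cdf, grid))

grid = np.linspace(-5.0, 5.0, 2001)
density = np.exp(-grid ** 2 / 2.0) / np.sqrt(2.0 * np.pi)  # stand-in estimate
q = plugin_quantile(grid, density, 0.975)
print(bool(abs(q - 1.96) < 0.01))        # N(0,1) 97.5% quantile is about 1.96
```

Efficiency of the plug-in method, in the sense of the abstract, means that no regular estimator of the quantile can have smaller asymptotic variance than this construction applied to an efficient density estimate.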
217

Central limit theorems and confidence sets in the calibration of Lévy models and in deconvolution

Söhl, Jakob 03 May 2013 (has links)
Central limit theorems and confidence sets are studied in two different but related nonparametric inverse problems, namely the calibration of an exponential Lévy model and the deconvolution model. In the first setup, an asset is modeled by the exponential of a Lévy process, option prices are observed, and the characteristic triplet of the Lévy process is estimated. The estimators are shown to be almost surely well defined; to this end, an upper bound on hitting probabilities of Gaussian random fields is proved and applied to a Gaussian process arising in the estimation method for Lévy models. Joint asymptotic normality is proved for the estimators of the volatility, the drift and the intensity, and for the pointwise estimators of the jump density. Based on these results, confidence intervals and confidence sets for the estimators are constructed; the confidence intervals perform well in simulations and are applied to option data on the German DAX index. In the deconvolution model, independent, identically distributed random variables with additive errors are observed, and linear functionals of the density of the random variables are estimated. Deconvolution models with ordinarily smooth errors are considered, for which the ill-posedness of the problem is given by the polynomial rate at which the characteristic function of the errors decays. A uniform central limit theorem is proved for the estimators of translation classes of linear functionals, which includes the estimation of the distribution function as a special case. The results hold in situations where a root-n rate can be obtained, more precisely whenever the Sobolev smoothness of the functionals is larger than the ill-posedness of the problem.
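The generic confidence-interval construction that a central limit theorem licenses can be sketched as follows; the Monte Carlo check uses the sample mean as a simple placeholder for the thesis's more involved estimators.

```python
import numpy as np

def normal_ci(est, se, z=1.96):
    """Asymptotic 95% confidence interval est +/- z * se from a CLT."""
    return est - z * se, est + z * se

# Monte Carlo check of empirical coverage for the mean of n i.i.d. draws
rng = np.random.default_rng(7)
n, trials, hits = 200, 2000, 0
for _ in range(trials):
    x = rng.normal(1.0, 2.0, n)
    lo, hi = normal_ci(x.mean(), x.std(ddof=1) / np.sqrt(n))
    hits += (lo <= 1.0 <= hi)             # does the interval cover the truth?
print(bool(abs(hits / trials - 0.95) < 0.02))   # empirical coverage near 95%
```

For the estimators in the abstract, the standard error comes from the asymptotic covariance delivered by the joint (or uniform) central limit theorem rather than from a simple sample variance.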
218

Imagens de fontes magnéticas usando um sistema multicanal de sensores magneto-resistivos / Magnetic Source images using a Magnetoresistive Sensors Multichannel System

Cruz, Juan Alberto Leyva 03 November 2005 (has links)
Apresenta-se o desenho, construção e caracterização de uma plataforma experimental para a obtenção de imagens magnéticas bidimensionais (2D) geradas pela distribuição não uniforme em gel de vaselina de micro-partículas magnéticas (magnetita- Fe3O4), acomodadas em fantomas magnéticos de geometrias irregulares. A instrumentação é basicamente formada por um arranjo multicanal de 12-sensores magnetorresistivos de última geração (modelo HMC 1001/1002 da Honeywell), os quais convertem os sinais magnéticos, a serem medidas, em voltagens diferenciais, que posteriormente passam-se pela etapa de condicionamento analógico multisinais, e adquiridos por uma placa de aquisição PCI de 16 canais simples, e geradas pelas fontes magnéticas (fantomas) as quais eram posicionadas acima de uma tabua porta-fantoma a qual era acionada por um sistema de posicionamento x-y, utilizando-se dois motores de passo controlados via porta paralela. A obtenção e processamento das imagens de forma automática foi levado acabo por médio da ferramenta computacional SmaGimFM v1.0 (grupo de scripts escritos pelo autor, em LABVIEW v8.1 e Matlab v7.3). A montagem experimental foi desenhada para realizar o scan numa área de ate (20x18) cm2. O sistema consegue medir campos na ordem de poucos nano-teslas (10-9 T). Foi demostrado experimentalmente que: a detectibilidade do sistema está na ordem de 100 pT/?Hz; a resolução, o menor valor da indução magnética detectada e a resolução espacial dos sensores foi aproximadamente de (3±1) nT e (3.0± 0.1) mm, respectivamente, este último obtido para uma distancia sensor-fonte média de (6.0± 0.1) mm. O nível de ruído ambiental médio foi corroborado experimentalmente no valor de 10 nT. O fator de Calibração para todos os sensores alimentados com 8V, foi aproximadamente de 10-6 T/V, confirmando o valor da sensibilidade nominal oferecida pelo vendedor no data-sheet dos sensores. 
Os multisinais sempre foram pré-processadas para a remoção dos offset, e posteriormente era realizadas uma interpolação bi-cúbica, para gerar imagens magnéticas com uma alta resolução espacial da ordem de (256x256) pixels. A funções de transferência da modulação e espalhamento pontual do sistema foram estudados e os sensores foram espaçados e fixados de acordo com os resultados destes estudos. Nesta tese todas as imagens cruas foram geradas pelo mapeamento da resposta do sistema multicanal de magnetômetros a pequenas distancias e geradas pela presença de micropartículas de magnetita (Fe3O4) não tratada termicamente e dispersada em oitos fantomas planares com geometrias complexas e chamados como: PhMão; PhNum; PhLines; PhCinco; PhTrês; PhCircle; PhQuadSmall e PhQuadBig. As imagens magnéticas de cada um destes fantomas é apresentada. A cada experimento, estes fantomas eram magnetizados pela ação de um pulso magnético uniforme no volume dos fantomas, com um valor aproximadamente de 81.6 mT, e produzido por um sistema de bobinas par de Helmholtz. Para fazer o registro experimental das imagens magnéticas, os fantomas foram posicionados a uma altura fixa em relação aos sensores, e movidos numa direção de scan, assim nos detectores observávamos as voltagens gerados pela variação no campo remanente devido às diferentes concentrações de micro-partículas magnéticas magnetizadas foram medidos e controlados por um computador pessoal. Usando as imagens cruas (imagens ruidosas e borradas) e outras informações a priori, foram obtidas as imagens reconstruídas das fontes do campo magnético, tais como, a distribuição de partículas ferrimagnéticas no interior dos fantomas, a qual é relacionada com a susceptibilidade magnética das amostras. 
Encontrar as imagens das fontes magnéticas, é resolver o problema magnético associado, e nosso trabalho estas restaurações foram realizadas usando-se os seguintes algoritmos numéricos de deconvolução, filtragem espacial de Wiener e Fourier, o filtragem Pseudo-inversa, o método do gradiente conjugado e os procedimentos de regularização de Tikhonov e Decomposição de Valores singulares truncados, dentre outros. Estes procedimentos foram implementados e testados. As imagens reconstruídas das fontes magnéticas de quatro fantomas são apresentadas. Estas técnicas foram programadas computacionalmente por médio de um conjunto de scripts chamados de SmaGimFM v1.0, estes foram escritos nos linguagens computacionais MATLAB® desde a MathWorks Inc.; e LABVIEW desde a National Instruments Inc. Estes resultados preliminares mostram que o sistema de imagens apresenta potencial para ser aplicada em estudos na área da Física Médica, onde imagens com moderada para alta resolução espacial e baixa amplitude da indução magnética são exigidas. Contudo, podemos afirmar que à distância sensor-fonte é crítica e afeta a resolução das imagens. O sistema é capaz de registrar imagens na ordem de 10-9 T, e sua elevada resolução espacial indica que pode ser testada como uma nova técnica biomagnética para gerar imagens em 2D de partículas magnéticas dentro de objetos, na região do campo próximo, para futuras aplicações médicas / We have designed and build a magnetic imaging system for obtaining experimental noisy and blurred magnetic images from distribution of ferromagnetic tracers (magnetite Fe3O4). The main part of the magnetic imaging system was formed by a linear array composed of 12-magnetoresistive sensors from Honeywell Inc. (HMC 1001). These sensors are microcircuits with a configuration of wheatstone-bridge and convert magnetic fields into differential voltage, which after pass for the multichannel signal stage can be to measure magnetic signals about of 10-9 T. 
The system is capable of scanning planar samples with dimensions up to 16x18 cm. A full experimental characterization of the magnetic imaging system was carried out. The calibration factor for all sensors, supplied at 8 V, was approximately 10⁻⁶ T/V, confirming the nominal properties in the vendor's data sheet. The spatial resolution and the field resolution of the magnetic imaging system were experimentally confirmed to be 3 mm and 3 nT, respectively. The noise spectral density was also characterized for the experimental conditions used in these studies. The signals were pre-processed to remove offsets and interpolated to improve spatial resolution, generating images of 256x256 pixels. The point spread and modulation transfer functions of the multi-sensor system were studied and the sensors were spaced accordingly. In this thesis, all raw images were generated by mapping the response of the multichannel magnetoresistive magnetometer array, at short distances, to non-heat-treated magnetite powder dispersed in eight planar phantoms with complex geometries, named: PhMão; PhNum; PhLines; PhCinco; PhTrês; PhCircle; PhQuadSmall and PhQuadBig. These phantoms were magnetized by a uniform pulse field of approximately 81.6 mT produced by a Helmholtz coil system. The samples were moved under the magnetoresistive sensors, and the voltages generated by the variation in remanent magnetic field due to the different concentrations of magnetized ferromagnetic particles were recorded and controlled by a personal computer.
Using the experimental noisy and blurred magnetic field images (raw images) and other a priori information, the reconstruction of the magnetic field source images, such as the distribution of ferromagnetic particles inside the phantoms (which is related to the magnetic susceptibility), was obtained with several inverse-problem algorithms: spatial Wiener and Fourier filtering, pseudo-inverse filtering, the conjugate gradient method, Tikhonov regularization, truncated singular value decomposition (TSVD), and others. These procedures were implemented by means of a set of scripts called SmaGimFM v1.0, which we developed in the MATLAB® language from MathWorks Inc. Preliminary results show that this magnetic imaging system, combined with a deconvolution technique, can be considered efficient for functional imaging of the gastrointestinal tract, where moderate resolution is required. We can state that the choice of sensor-to-source distance is a critical parameter that affects the resolution of the images, and we conclude that this magnetic imaging method can be successfully used to generate planar blurred magnetic images, and magnetic field source images, in the near-field region at the macroscopic level, from ferromagnetic materials.
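The Wiener deconvolution step described in this abstract can be sketched briefly. The thesis' own SmaGimFM v1.0 scripts are MATLAB/LABVIEW and are not reproduced here; the NumPy version below is only an illustrative sketch, with a hypothetical Gaussian point spread function and a synthetic square phantom standing in for the measured field maps.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, k=1e-2):
    """Frequency-domain Wiener filter: F_hat = conj(H) / (|H|^2 + k) * G.

    `psf` is assumed centered at index (0, 0) (wrap-around convention);
    `k` is a regularization constant standing in for the noise-to-signal ratio.
    """
    H = np.fft.fft2(psf)
    G = np.fft.fft2(blurred)
    F_hat = np.conj(H) / (np.abs(H) ** 2 + k) * G
    return np.real(np.fft.ifft2(F_hat))

# Synthetic example: a square "phantom" blurred by a Gaussian PSF plus noise.
n = 64
ax = np.minimum(np.arange(n), n - np.arange(n))   # wrap-around distances
yy, xx = np.meshgrid(ax, ax, indexing="ij")
psf = np.exp(-(xx**2 + yy**2) / (2 * 2.0**2))     # sigma = 2 pixels (assumed)
psf /= psf.sum()

src = np.zeros((n, n))
src[20:40, 20:40] = 1.0                           # hypothetical source map
rng = np.random.default_rng(0)
blurred = np.real(np.fft.ifft2(np.fft.fft2(src) * np.fft.fft2(psf)))
blurred += 0.01 * rng.standard_normal((n, n))     # measurement noise

restored = wiener_deconvolve(blurred, psf, k=1e-2)
```

The constant k trades noise amplification against sharpness; this is the same trade-off that the Tikhonov and TSVD procedures mentioned in the abstract address by other means.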
219

Study of the processing of ultra-high molecular weight polyethylene (UHMWPE) and polyethylene glycol (PEG) by high-energy milling

Gabriel, Melina Correa 29 March 2010 (has links)
Made available in DSpace on 2017-07-21. Previous issue date: 2010-03-29 / Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / The intention of this exploratory research is to study the modifications produced by high-energy mechanical milling in ultra-high molecular weight polyethylene (UHMWPE) and in mixtures of this polymer with polyethylene glycol (PEG). These modifications may be of interest for subsequent processing of UHMWPE. The mechanical milling was performed in an attritor mill, a type of mill that can be used both in the laboratory and in industry. The millings of UHMWPE were performed at different rotation speeds; for mixtures of UHMWPE and PEG, the PEG content was also varied. The samples were characterized by scanning electron microscopy (SEM), atomic force microscopy (AFM), differential scanning calorimetry (DSC) and X-ray diffraction (XRD). Mechanical milling modified the morphology of the UHMWPE particles: with milling, the almost rounded shape became flake-like, which reduced the apparent density of the polymer after milling. Mechanical milling also produced structural changes. As the rotation speed increased up to 500 rpm, the monoclinic phase fraction increased and the orthorhombic fraction decreased. At 600 rpm, the amount of monoclinic phase decreased; at this speed, the deformation rate probably raised the process temperature, allowing the monoclinic phase to return to its original orthorhombic structure. In mixtures of UHMWPE and PEG, after mechanical milling, the PEG particles were probably reduced in size and better dispersed in the UHMWPE matrix. Changes in the thermal characteristics of the polymers were also observed: the kinetics of UHMWPE crystal growth changed, as did the crystallization behavior of PEG.
Possibly, the dispersed PEG particles acted as physical barriers against the growth of the crystalline phase of UHMWPE, and the crystallization temperature of PEG decreased when the UHMWPE and PEG mixtures were milled. / This exploratory work aimed to study the modifications promoted by high-energy milling in ultra-high molecular weight polyethylene (UHMWPE) and in its mixture with polyethylene glycol (PEG), which may be of interest in aiding subsequent processing of UHMWPE. The millings were performed in an attritor-type mill, which can be used both in the laboratory and at industrial scale. The rotation speeds in the milling of UHMWPE were varied, as were the PEG concentrations when the mixture was prepared. The samples were characterized by scanning electron microscopy (SEM), atomic force microscopy (AFM), differential scanning calorimetry (DSC) and X-ray diffraction. High-energy milling modified the shape of the UHMWPE particles, from rounded to flakes as the milling process evolved, greatly decreasing the apparent density of the polymer compared to the unmilled material. Milling also produced a structural change, allowing the formation of the monoclinic phase at the expense of the orthorhombic one. As the mill rotation was increased up to 500 rpm, the monoclinic phase grew. Only at 600 rpm did the amount of this phase decrease, possibly due to the increased impact frequency and processing temperature, making the monoclinic structure return to the original orthorhombic structure. In the mixture of UHMWPE with PEG, milling probably allowed size reduction of the particles and better dispersion of PEG in the UHMWPE matrix. Changes in the thermal characteristics of the polymers in the mixture after milling were also observed.
There was a change in the growth kinetics of the UHMWPE crystals and in the crystallization behavior of PEG, a change that did not occur for milled UHMWPE alone or for the UHMWPE and PEG mixture before milling. Possibly, the dispersed PEG particles acted as barriers to the growth of the crystalline phase of UHMWPE, and the crystallization temperature of PEG decreased in the mixture with UHMWPE after milling.
220

Moments method for random matrices with applications to wireless communication

Masucci, Antonia Maria 29 November 2011 (has links)
In this thesis, we study the application of the moments method to telecommunications. We analyze this method and show its importance for the study of random matrices, using the framework of free probability. The notion of free convolution/deconvolution can be used to predict the asymptotic spectrum of random matrices that are asymptotically free. We show that the moments method is a powerful tool even for computing the moments/asymptotic moments of matrices that do not possess the asymptotic freeness property. In particular, we consider finite-size Gaussian random matrices and random Vandermonde matrices. We derive the full series expansion of the eigenvalue distribution of various models, for example noncentral Wishart distributions and Wishart distributions with correlated zero-mean entries. The inference framework for finite-dimensional matrices is flexible enough to allow combinations of random matrices. The results we present are implemented in Matlab code by generating subsets, permutations and equivalence relations. We apply this framework to the study of cognitive networks and of networks with high mobility. We analyze the moments of random Vandermonde matrices with entries on the unit circle, and use these moments, together with polynomial expansion detectors, to describe low-complexity detectors of the signal transmitted by mobile users to a base station (or two base stations) represented by uniform linear arrays. / In this thesis, we focus on the analysis of the moments method, showing its importance in the application of random matrices to wireless communication. This study is conducted in the free probability framework.
The concept of free convolution/deconvolution can be used to predict the spectrum of sums or products of random matrices which are asymptotically free. In this framework, we show that the moments method is very appealing and powerful for deriving the moments/asymptotic moments in cases where the property of asymptotic freeness does not hold. In particular, we focus on Gaussian random matrices with finite dimensions and on structured matrices such as Vandermonde matrices. We derive the explicit series expansion of the eigenvalue distribution of various models, such as noncentral Wishart distributions, as well as correlated zero-mean Wishart distributions. We describe an inference framework flexible enough to apply to repeated combinations of random matrices. The results that we present are implemented by generating subsets, permutations, and equivalence relations. We developed a Matlab routine in order to perform convolution or deconvolution numerically in terms of a set of input moments. We apply this inference framework to the study of cognitive networks, as well as to the study of wireless networks with high mobility. We analyze the asymptotic moments of random Vandermonde matrices with entries on the unit circle, and use them together with polynomial expansion detectors in order to design a low-complexity linear MMSE decoder to recover the signal transmitted by mobile users to a base station, or two base stations, represented by uniform linear arrays.
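The moments method summarized above can be illustrated with a small Monte Carlo check (this is a standard textbook fact, not the thesis' own Matlab routines, which are not reproduced here): for a square complex Gaussian matrix X, the empirical moments (1/N) tr((XXᴴ/N)ᵏ) converge, as N grows, to the Catalan numbers 1, 2, 5, 14, …, which are the moments of the Marchenko-Pastur law with ratio 1.

```python
import numpy as np

def empirical_moments(W, kmax):
    """Return the empirical moments (1/N) tr(W^k) for k = 1..kmax."""
    n = W.shape[0]
    Wk = np.eye(n, dtype=W.dtype)
    moments = []
    for _ in range(kmax):
        Wk = Wk @ W
        moments.append(float(np.trace(Wk).real) / n)
    return moments

rng = np.random.default_rng(1)
n = 500
# Complex Gaussian entries with unit variance.
X = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
W = X @ X.conj().T / n                     # sample-covariance-type matrix

moments = empirical_moments(W, 4)
catalan = [1.0, 2.0, 5.0, 14.0]            # asymptotic (free-probability) moments
```

Free convolution/deconvolution generalizes this idea: given the moments of two asymptotically free ensembles, the moments of their sum or product can be combined, or "deconvolved" back, in the same spirit.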
