  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
291

Méthode non-additive intervalliste de super-résolution d'images, dans un contexte semi-aveugle / A non-additive interval-valued super-resolution image method, in a semi-blind context

Graba, Farès 17 April 2015 (has links)
La super-résolution est une technique de traitement d'images qui consiste en la reconstruction d'une image hautement résolue à partir d'une ou plusieurs images bassement résolues. Cette technique est apparue dans les années 1980 pour tenter d'augmenter artificiellement la résolution des images et donc de pallier, de façon algorithmique, les limites physiques des capteurs d'images. Comme beaucoup de techniques de reconstruction en traitement d'images, la super-résolution est connue pour être un problème mal posé dont la résolution numérique est mal conditionnée. Ce mauvais conditionnement rend la qualité des images hautement résolues reconstruites très sensible au choix du modèle d'acquisition des images, et particulièrement à la modélisation de la réponse impulsionnelle de l'imageur. Dans le panorama des méthodes de super-résolution que nous dressons, nous montrons qu'aucune des méthodes proposées par la littérature ne permet de modéliser proprement le fait que la réponse impulsionnelle d'un imageur est, au mieux, connue de façon imprécise. Au mieux, l'écart existant entre modèle et réalité est modélisé par une variable aléatoire, alors que ce biais est systématique. Nous proposons de modéliser l'imprécision de la connaissance de la réponse impulsionnelle par un ensemble convexe de réponses impulsionnelles. L'utilisation d'un tel modèle remet en question les techniques de résolution. Nous proposons d'adapter une des techniques classiques les plus populaires, connue sous le nom de rétro-projection itérative, à cette représentation imprécise. L'image super-résolue reconstruite est de nature intervalliste, c'est-à-dire que la valeur associée à chaque pixel est un intervalle réel. Cette reconstruction s'avère robuste à la modélisation de la réponse impulsionnelle ainsi qu'à d'autres défauts. Il s'avère aussi que la largeur des intervalles obtenus permet de quantifier l'erreur de reconstruction.
/ Super-resolution is an image processing technique that involves reconstructing a high resolution image from one or several low resolution images. This technique appeared in the 1980s in an attempt to artificially increase image resolution and therefore to overcome, algorithmically, the physical limits of an imager. Like many reconstruction problems in image processing, super-resolution is known as an ill-posed problem whose numerical resolution is ill-conditioned. This ill-conditioning makes high resolution image reconstruction quality very sensitive to the choice of image acquisition model, particularly to the model of the imager's Point Spread Function (PSF). In the survey of super-resolution methods that we present, we show that none of the methods proposed in the relevant literature allows properly modeling the fact that the imager PSF is, at best, imprecisely known. At best, the deviation between model and reality is considered as being a random variable, while it is not: the bias is systematic. We propose to model scant knowledge of the imager's PSF by a convex set of PSFs. The use of such a model challenges the classical inversion methods. We propose to adapt one of the most popular super-resolution methods, known under the name of "iterative back-projection", to this imprecise representation. The super-resolved image reconstructed by the proposed method is interval-valued, i.e. the value associated to each pixel is a real interval. This reconstruction turns out to be robust to the PSF model and to some other errors. It also turns out that the width of the obtained intervals quantifies the reconstruction error.
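The iterative back-projection idea the abstract builds on can be sketched in a few lines. This is an illustrative 1-D toy under assumed names (`degrade`, `ibp`, `interval_ibp` are all hypothetical), in which imprecise PSF knowledge is approximated naively by running the classical scheme once per candidate PSF and keeping per-pixel min/max bounds; it is a surrogate for, not a reproduction of, the thesis's non-additive interval-valued method:

```python
import numpy as np

def degrade(hr, psf, factor):
    # Simulate low-resolution acquisition: blur with the PSF, then subsample.
    blurred = np.convolve(hr, psf, mode="same")
    return blurred[::factor]

def ibp(lr, psf, factor, n_iter=50, step=1.0):
    # Classical iterative back-projection: refine a high-resolution estimate
    # by redistributing the simulation error back onto the HR grid.
    hr = np.repeat(lr, factor).astype(float)  # naive initial upsampling
    for _ in range(n_iter):
        err = lr - degrade(hr, psf, factor)
        hr += step * np.repeat(err, factor) / factor
    return hr

def interval_ibp(lr, psf_set, factor, **kw):
    # Imprecise-PSF variant (illustrative only): run IBP once per candidate
    # PSF and keep, per pixel, the interval spanned by the reconstructions.
    recs = np.stack([ibp(lr, p, factor, **kw) for p in psf_set])
    return recs.min(axis=0), recs.max(axis=0)

hr_true = np.array([0, 0, 1, 1, 4, 4, 1, 1, 0, 0], dtype=float)
psf_set = [np.array([0.25, 0.5, 0.25]), np.array([0.2, 0.6, 0.2])]
lr = degrade(hr_true, psf_set[0], factor=2)
lo, hi = interval_ibp(lr, psf_set, factor=2)
```

The width `hi - lo` then plays the role the abstract describes for the interval image: a per-pixel indication of reconstruction uncertainty.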
292

"Super-Heróis da Ebal - A publicação nacional dos personagens dos 'comic books' dos EUA pela Editora Brasil-América (EBAL), década de 1960 e 70" / EBAL SUPER-HEROES: the publication in Brazil of American comic books superheroes by Editora Brasil-America (EBAL) from the mid-60's to the mid-70's

Flexa, Rodrigo Nathaniel Arco e 20 June 2006 (has links)
Estudo sobre a publicação nacional dos super-heróis dos comic books dos EUA pela Editora Brasil-América (EBAL) entre meados dos anos 60 e 70. Para tanto, será traçado um panorama histórico, acrescido das coordenadas teóricas que norteiam a análise da iconografia dessas revistas. Houve uma extensa pesquisa iconográfica focalizada nas edições da EBAL, além de levantamento de histórias em quadrinhos de outras editoras e épocas. Mesmo sendo um produto típico da indústria cultural do século 20, as histórias em quadrinhos apresentam inflexões que permitem relacionar arte, cultura, sociedade e imaginário. O estudo inclui ainda entrevistas com leitores da EBAL. / A study of the publication in Brazil of American comic-book superheroes by Editora Brasil-América (EBAL) from the mid-60's to the mid-70's. To that end, the work traces a historical panorama, to which are added the theoretical coordinates guiding the analysis of these magazines' iconography. Extensive iconographic research focused on EBAL's publications was carried out, together with a survey of comics from other publishing houses and other decades. Although a typical product of the 20th-century cultural industry, comics show contradictions that allow relations to be established between art, culture, society and the imaginary. The study also includes interviews with frequent readers of EBAL publications.
293

Mandible and Skull Segmentation in Cone Beam Computed Tomography Data / Segmentação da mandíbula e o crânio em tomografia computadorizada de feixe cônico

Oscar Alonso Cuadros Linares 18 December 2017 (has links)
Cone Beam Computed Tomography (CBCT) is a medical imaging technique routinely employed for diagnosis and treatment of patients with cranio-maxillo-facial defects. CBCT 3D reconstruction and segmentation of bones such as the mandible or maxilla are essential procedures in orthodontic treatments. However, CBCT images present characteristics that are not desirable for processing, including low contrast, inhomogeneity, noise, and artifacts. Besides, values assigned to voxels are relative Hounsfield Units (HU), unlike traditional Computed Tomography (CT). Such drawbacks render CBCT segmentation a difficult and time-consuming task, usually performed manually with tools designed for medical image processing. We introduce two interactive two-stage methods for 3D segmentation of CBCT data: i) we first reduce the CBCT image resolution by grouping similar voxels into super-voxels defining a graph representation; ii) next, seeds placed by users guide graph clustering algorithms, splitting the bones into mandible and skull. We have evaluated our segmentation methods extensively by comparing the results against ground-truth data of the mandible and the skull in various scenarios. Results show that our methods produce accurate segmentations and are robust to changes in parameter settings. We also compared our approach with a similar segmentation strategy and showed that ours produces more accurate segmentations of the mandible and skull. In addition, we have evaluated our proposal on CT data of patients with deformed or missing bones, obtaining more accurate segmentation in all cases. As for the efficiency of our implementation, segmentation of a typical CBCT image of the human head takes about five minutes. Finally, we carried out a usability test with orthodontists. Results show that our proposal not only produces accurate segmentations but also delivers effortless, intuitive user interaction.
/ Tomografia Computadorizada de Feixe Cônico (TCFC) é uma modalidade para obtenção de imagens médicas 3D do crânio usada para diagnóstico e tratamento de pacientes com defeitos crânio-maxilo-faciais. A segmentação tridimensional de ossos como a mandíbula e a maxila são procedimentos essenciais em tratamentos ortodônticos. No entanto, a TCFC apresenta características não desejáveis para processamento digital como, por exemplo, baixo contraste, inomogeneidade, ruído e artefatos. Além disso, os valores atribuídos aos voxels são unidades de Hounsfield (HU) relativas, diferentemente da Tomografia Computadorizada (TC) tradicional. Esses inconvenientes tornam a segmentação de TCFC uma tarefa difícil e demorada, a qual é normalmente realizada por meio de ferramentas desenvolvidas para processamento digital de imagens médicas. Esta tese introduz dois métodos interativos para a segmentação 3D de TCFC, os quais são divididos em duas etapas: i) redução da resolução da TCFC por meio do agrupamento de voxels em super-voxels, seguida da criação de um grafo no qual os vértices são super-voxels; ii) posicionamento de sementes pelo usuário e segmentação por algoritmos de agrupamento em grafos, o que permite separar os ossos rotulados. Os métodos foram intensamente avaliados por meio da comparação dos resultados com padrão ouro da mandíbula e do crânio, considerando diversos cenários. Os resultados mostraram que os métodos não apenas produzem segmentações precisas, como também são robustos a mudanças nos parâmetros. Foi ainda realizada uma comparação com um trabalho relacionado, gerando melhores resultados tanto na segmentação da mandíbula quanto na do crânio. Além disso, foram avaliadas TCs de pacientes com ossos faltantes e quebrados. A segmentação de uma TCFC é realizada em cerca de 5 minutos. Por fim, foram realizados testes com usuários ortodontistas. Os resultados mostraram que nossa proposta não apenas produz segmentações precisas, como também é de fácil interação.
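The two-stage pipeline described in this record (a super-voxel adjacency graph, then seed-guided graph clustering) can be illustrated with a minimal sketch. The graph, intensities, and similarity rule below are toy assumptions for illustration, not the thesis's actual clustering algorithm:

```python
from collections import deque

def seeded_graph_segmentation(adjacency, intensity, seeds, tol=0.2):
    # Seeded clustering on a (super-)voxel adjacency graph: labels grow
    # outward from the user's seeds, crossing an edge only when the two
    # nodes have similar mean intensity (a stand-in for the real criteria).
    labels = dict(seeds)
    queue = deque(seeds)
    while queue:
        n = queue.popleft()
        for m in adjacency[n]:
            if m not in labels and abs(intensity[m] - intensity[n]) <= tol:
                labels[m] = labels[n]
                queue.append(m)
    return labels

# Toy 6-node chain graph: nodes 0-2 bright ("mandible"), 3-5 darker ("skull").
adjacency = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
intensity = {0: 1.0, 1: 0.95, 2: 0.9, 3: 0.3, 4: 0.25, 5: 0.2}
labels = seeded_graph_segmentation(adjacency, intensity,
                                   seeds={0: "mandible", 5: "skull"})
```

Two seeds suffice here because the intensity jump between nodes 2 and 3 blocks label propagation across the bone boundary, which mirrors the interaction style the abstract describes.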
294

Influência da formação estelar versus buracos negros de núcleos ativos de galáxias (AGN) na evolução de ventos galácticos / Star Formation versus Active Galactic Nuclei (AGN) Black Hole feedback in the Evolution of Galaxy Outflows

Bohórquez, William Eduardo Clavijo 10 August 2018 (has links)
Ventos (em inglês outflows) de ampla abertura e larga escala são uma característica comum em galáxias ativas, como as galáxias Seyfert. Em sistemas como este, onde buracos negros supermassivos (em inglês super massive black holes, SMBHs) de núcleos ativos de galáxias (em inglês active galactic nuclei, AGN) coexistem com regiões de formação estelar (em inglês star forming, SF), não está claro das observações se o AGN SMBH ou o SF (ou ambos) são responsáveis pela indução desses ventos. Neste trabalho, estudamos como ambos podem influenciar a evolução da galáxia hospedeira e seus outflows, considerando galáxias tipo Seyfert nas escalas de kilo-parsec (kpc). Para este objetivo, estendemos o trabalho anterior desenvolvido por Melioli & de Gouveia Dal Pino (2015), que considerou ventos puramente hidrodinâmicos impulsionados tanto pela SF quanto pelo AGN, mas levando em conta para este último apenas ventos bem estreitos (colimados). A fim de obter uma melhor compreensão da influência (feedback) desses mecanismos sobre a evolução da galáxia e seus outflows, incluímos também os efeitos de ventos de AGN com maior ângulo de abertura, já que ventos em forma de cone podem melhorar a interação com o meio interestelar da galáxia e assim empurrar mais gás nos outflows. Além disso, incluímos também os efeitos dos campos magnéticos no vento, já que estes podem, potencialmente, ajudar a preservar as estruturas e acelerar os outflows. Realizamos simulações tridimensionais magneto-hidrodinâmicas (MHD) considerando o resfriamento radiativo em equilíbrio de ionização e os efeitos dos ventos do AGN com dois diferentes ângulos de abertura (0º e 10º) e razões entre a pressão térmica e a pressão magnética beta = infinito, 300 e 30, correspondentes a campos magnéticos 0, 0,76 micro-Gauss e 2,4 micro-Gauss, respectivamente.
Os resultados de nossas simulações mostram que os ventos impulsionados pelos produtos de SF (isto é, pelas explosões de supernovas, SNe) podem direcionar ventos com velocidades de ~100-1000 km s⁻¹, taxas de perda de massa da ordem de 50 massas solares/ano, densidades de ~1-10 cm⁻³ e temperaturas entre 10 e 10 K, que se assemelham às propriedades dos denominados absorvedores de calor (em inglês warm absorbers, WAs) e também são compatíveis com as velocidades dos outflows moleculares observadas. No entanto, as densidades obtidas nas simulações são muito pequenas e as temperaturas são muito grandes para explicar os valores observados nos outflows moleculares (que têm n ~150-300 cm⁻³ e T < 1000 K). Ventos colimados de AGN (sem a presença de ventos SF) também são incapazes de conduzir esses outflows, mas podem acelerar estruturas a velocidades muito altas, da ordem de ~10.000 km s⁻¹, e temperaturas T > 10 K, tal como observado em ventos ultrarrápidos (em inglês, ultra-fast outflows, UFOs). A introdução do vento de AGN, particularmente com um grande ângulo de abertura, causa a formação de estruturas semelhantes a fontes galácticas. Isso faz com que parte do gás em expansão (que está sendo empurrado pelo vento de SF) retorne para a galáxia, produzindo um feedback 'positivo' na evolução da galáxia hospedeira. Descobrimos que esses efeitos são mais pronunciados na presença de campos magnéticos, devido à ação de forças magnéticas extras pelo vento AGN, o qual intensifica o efeito de retorno do gás (fallback) e, ao mesmo tempo, reduz a taxa de perda de massa nos outflows por fatores de até 10. Além disso, a presença de um vento de AGN colimado (0º) causa uma remoção significativa da massa do núcleo da galáxia em poucos 100.000 anos, mas este é logo reabastecido pelo gás acretado proveniente do meio interestelar (ISM) à medida que as explosões de SNe se sucedem.
Por outro lado, um vento de AGN com um grande ângulo de abertura, em presença de campos magnéticos, remove o gás nuclear inteiramente em alguns 100.000 anos e não permite o reabastecimento posterior pelo ISM. Portanto, extingue a acreção de combustível e de massa no SMBH. Isso indica que o ciclo de trabalho desses outflows é de cerca de alguns 100.000 anos, compatível com as escalas de tempo inferidas para os UFOs e outflows moleculares observados. Em resumo, os modelos que incluem ventos de AGN com um ângulo de abertura maior e campos magnéticos levam a velocidades médias muito maiores que os modelos sem vento de AGN, e também permitem que mais gás seja acelerado para velocidades máximas em torno de ~10.000 km s⁻¹, com densidades e temperaturas compatíveis com aquelas observadas em UFOs. No entanto, as estruturas com velocidades intermediárias de vários ~100 km s⁻¹ e densidades até uns poucos 100 cm⁻³, que de fato poderiam reproduzir os outflows moleculares observados, têm temperaturas que são muito grandes para explicar as características observadas nos outflows moleculares, que têm temperaturas T < 1000 K. Além disso, estes ventos de AGN de grande ângulo de abertura, em presença de campos magnéticos, reduzem as taxas de perda de massa dos outflows para valores menores que aqueles observados tanto em outflows moleculares quanto em UFOs. Em trabalhos futuros, pretendemos estender o espaço paramétrico aqui investigado e também incluir novos ingredientes em nossos modelos, como o resfriamento radiativo fora do equilíbrio, a fim de tentar reproduzir as características acima que não foram explicadas pelo modelo atual. / Large-scale broad outflows are a common feature in active galaxies, like Seyfert galaxies.
In systems like this, where supermassive black hole (SMBH) active galactic nuclei (AGN) coexist with star-forming (SF) regions, it is unclear from the observations whether the AGN, the SF, or both are driving these outflows. In this work, we have studied how both may influence the evolution of the host galaxy and its outflows, considering Seyfert-like galaxies at kilo-parsec (kpc) scales. For this aim, we have extended previous work developed by Melioli & de Gouveia Dal Pino (2015), who considered purely hydrodynamical outflows driven by both SF and AGN, but considering for the latter only very narrow (collimated) winds. In order to achieve a better understanding of the feedback of these mechanisms on the galaxy evolution and its outflows, here we have also included the effects of AGN winds with a larger opening angle, since conic-shaped winds can improve the interaction with the interstellar medium of the galaxy and thus push more gas into the outflows. Besides, we have also included the effects of magnetic fields in the flow, since these can potentially help to preserve the structures and speed up the outflows. We have performed three-dimensional magneto-hydrodynamical (MHD) simulations considering equilibrium radiative cooling and the effects of AGN-winds with two different opening angles (0º and 10º), and thermal pressure to magnetic pressure ratios of beta = infinite, 300 and 30, corresponding to magnetic fields of 0, 0.76 micro-Gauss and 2.4 micro-Gauss, respectively. The results of our simulations show that the winds driven by the products of SF (i.e., by explosions of supernovae, SNe) alone can drive outflows with velocities of ~100-1000 km s⁻¹, mass outflow rates of the order of 50 solar masses yr⁻¹, densities of ~1-10 cm⁻³, and temperatures between 10 and 10 K, which resemble the properties of warm absorbers (WAs) and are also compatible with the velocities of the observed molecular outflows.
However, the densities obtained from the simulations are too small and the temperatures too large to explain the observed values in molecular outflows (which have n ~150-300 cm⁻³ and T < 1000 K). Collimated AGN winds alone (without the presence of SF-winds) are also unable to drive these outflows, but they can accelerate structures to very high speeds, of the order of ~10,000 km s⁻¹, and temperatures T > 10 K, as observed in ultra-fast outflows (UFOs). The introduction of an AGN wind, particularly with a large opening angle, causes the formation of fountain-like structures. This causes part of the expanding gas (pushed by the SF-wind) to fall back into the galaxy, producing a 'positive' feedback on the host galaxy evolution. We have found that these effects are more pronounced in the presence of magnetic fields, due to the action of extra magnetic forces by the AGN wind producing enhanced fallback that reduces the mass loss rate in the outflows by factors up to 10. Furthermore, the presence of a collimated AGN wind (0º) causes a significant removal of mass from the core region in a few 100,000 yr, but this is soon replenished by gas inflow from the interstellar medium (ISM) when the SNe explosions fully develop. On the other hand, an AGN wind with a large opening angle in the presence of magnetic fields is able to remove the nuclear gas entirely within a few 100,000 yr and does not allow for later replenishment. Therefore, it quenches the fueling and mass accretion onto the SMBH. This indicates that the duty cycle of these outflows is around a few 100,000 yr, compatible with the time-scales inferred for the observed UFOs and molecular outflows. In summary, models that include AGN winds with a larger opening angle and magnetic fields lead to much larger mean velocities and allow more gas to be accelerated to maximum velocities around ~10,000 km s⁻¹ (than models with collimated AGN winds), with densities and temperatures which are compatible with those observed in UFOs.
However, the structures with intermediate velocities of several ~100 km s⁻¹ and densities up to a few 100 cm⁻³, which in fact could reproduce the observed molecular outflows, have temperatures that are too large to explain the observed molecular features, which have temperatures T < 1000 K. Besides, these large opening angle AGN winds in magnetized flows reduce the mass loss rates of the outflows to values smaller than those observed both in molecular outflows and UFOs. In future work, we intend to extend the parameter space investigated here and also include new ingredients in our models, such as non-equilibrium radiative cooling, in order to try to reproduce the features above that were not explained by the current model.
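As a consistency check on the quoted simulation parameters: the plasma beta is the thermal-to-magnetic pressure ratio, so at fixed thermal pressure the field strength scales as β^(-1/2), and the two quoted fields should indeed differ by a factor of √10:

```latex
\beta = \frac{P_{\mathrm{th}}}{P_{\mathrm{mag}}} = \frac{8\pi P_{\mathrm{th}}}{B^{2}},
\qquad
\frac{B(\beta = 30)}{B(\beta = 300)} = \sqrt{\frac{300}{30}} = \sqrt{10} \approx 3.16
\approx \frac{2.4\,\mu\mathrm{G}}{0.76\,\mu\mathrm{G}} .
```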
295

Estudo da formação e reversão de martensita induzida por deformação na austenita de dois aços inoxidáveis dúplex. / The study of formation and reversion of the strain induced alpha-prime martensite in duplex and super duplex stainless steels

Aguiar, Denilson José Marcolino de 17 August 2012 (has links)
No presente trabalho foram estudados os fenômenos de encruamento e, principalmente, a formação e reversão da martensita alfa-linha (a', cúbica de corpo centrado, CCC, ferromagnética) induzida por deformação em um aço inoxidável dúplex UNS S31803 e um super dúplex UNS S32520. Inicialmente, as microestruturas dos dois materiais na condição solubilizada foram caracterizadas com auxílio de várias técnicas complementares de análise microestrutural. Foram determinadas fração volumétrica, estrutura cristalina, composição química, tamanho e morfologia das duas fases (ferrita e austenita). Posteriormente, os dois aços foram deformados por dois métodos: a laminação a frio, dividida em vários estágios, com menores graus de deformação, e a limagem, sendo que o cavaco limado resultante apresenta altos graus de deformação. Algumas amostras deformadas foram recozidas. Os fenômenos de encruamento, formação e reversão de martensita induzida por deformação na austenita, recuperação, recristalização da austenita e da ferrita no cavaco limado foram estudados predominantemente por difratometria de raios X e usando o método de Rietveld. A difratometria de raios X também foi utilizada para determinação das microdeformações residuais e tamanhos de cristalito (subgrão), calculadas a partir do alargamento dos picos de difração causado pelas deformações. Desta forma, puderam-se comparar os níveis de deformação da laminação e limagem. Qualitativamente, a formação e reversão da martensita induzida por deformação também foi estudada por meio de medidas magnéticas utilizando-se dados de saturação magnética das curvas de histerese obtidas com o auxílio de um magnetômetro de amostra vibrante. Observou-se que, para o aço inoxidável dúplex, tanto a laminação quanto a limagem causaram a formação de martensita induzida por deformação e, para o aço inoxidável super dúplex, apenas a limagem promoveu essa transformação.
Em comparação com o aço dúplex, o aço super dúplex apresentou maior resistência à formação de martensita induzida por deformação, pois apresenta uma austenita mais rica em nitrogênio, e uma maior propensão à formação de fase sigma durante o recozimento, pois apresenta uma ferrita mais rica em cromo e nitrogênio. / In the present work the phenomena of strain hardening and, mainly, the formation and reversion of the strain induced alpha-prime martensite (a', body centered cubic, BCC, ferromagnetic) in UNS S31803 duplex and UNS S32520 super duplex stainless steels have been studied. Firstly, the microstructures of both materials in the solution annealed condition were characterized with the aid of several complementary microstructural analysis techniques. The volume fraction, crystalline structure, chemical composition, size and morphology of the two phases (ferrite and austenite) have been determined. Next, both steels were deformed by two methods: cold rolling, divided into several stages with lower strain levels, and filing, whose resulting chips exhibit high strain levels. The phenomena of strain hardening, formation and reversion of strain induced martensite in the austenite phase, and recovery and recrystallization of the austenite and ferrite phases have been studied, mainly using X-ray diffraction and the Rietveld method. X-ray diffraction was also used to determine the residual microstrain and crystallite (sub-grain) size, calculated from the diffraction peak broadening caused by straining. Thus, the strain levels of cold rolling and filing could be compared. Qualitatively, the formation and reversion of strain induced martensite was also studied by magnetic measurements using magnetic saturation data from hysteresis curves obtained with the aid of a vibrating sample magnetometer. It has been observed that, for the duplex stainless steel, both cold rolling and filing promoted strain induced martensite.
On the other hand, for the super duplex stainless steel, only filing promoted this transformation. Compared with the duplex steel, the austenite of the super duplex stainless steel is more stable because it is richer in nitrogen, making strain induced martensite formation more difficult. The easier sigma phase precipitation during annealing in the super duplex stainless steel is likewise due to its higher chromium and molybdenum contents relative to the duplex stainless steel.
296

Descrição de medidas em sistemas de 2 níveis pela equação de Lindblad com inclusão de ambiente / Analysis of the environmental influence on the measurement process of a 2-level system using the Lindblad equation

Brasil, Carlos Alexandre 23 February 2012 (has links)
O objetivo deste trabalho é explorar um modelo para medidas quânticas de duração finita baseado na equação de Lindblad, com a análise de um sistema de 2 níveis acoplado a um reservatório térmico que ocasiona decoerência. A interação entre o sistema e o dispositivo de medida é markoviana, justificando o uso da equação de Lindblad para obter a dinâmica do processo de medida. Para analisar a influência do ambiente/reservatório térmico não-markoviano, cuja definição não inclui o aparato de medida, foi utilizada a abordagem de Redfield para a interação entre o sistema e o ambiente. Na teoria híbrida aqui exposta, para efetuar o traço parcial dos graus de liberdade do ambiente foi desenvolvido um método analítico baseado na álgebra de super-operadores e no uso dos super-operadores de Nakajima-Zwanzig. Foi verificado que medidas de duração finita sobre o sistema aberto de 2 níveis podem proteger o estado inicial dos efeitos do ambiente, desde que o observável medido não comute com a interação. Quando o observável medido comuta com a interação sistema-ambiente, a medida de duração finita acelera a decoerência induzida pelo ambiente. A validade das previsões analíticas foi testada comparando os resultados com uma abordagem numérica exata. Quando o acoplamento entre o sistema e o aparato de medida excede a faixa de validade da aproximação analítica, o estado inicial ainda é protegido pela medida de duração finita, como indicam os cálculos numéricos exatos. / The aim of this work is to explore a model for finite-time measurement based on the Lindblad equation, with analysis of a system consisting of a 2-level system coupled to a thermal reservoir. We assume a Markovian measuring device and, therefore, use a Lindbladian description for the measurement dynamics. For studying the case of noise produced by a non-Markovian environment, whose definition does not include the measuring apparatus, we use the Redfield approach to the interaction between system and environment. 
In the present hybrid theory, to trace out the environmental degrees of freedom, we introduce an analytic method based on superoperator algebra and Nakajima-Zwanzig superoperators. We show that measurements of finite duration performed on an open two-state system can protect the initial state from a phase-noisy environment, provided the measured observable does not commute with the perturbing interaction. When the measured observable commutes with the environmental interaction, the finite-duration measurement accelerates the rate of decoherence induced by the phase noise. We have tested the validity of the analytical predictions against an exact numerical approach. When the coupling between the system and the measuring apparatus increases beyond the range of validity of the analytical approximation, the initial state is still protected by the finite-time measurement, in agreement with the exact numerical calculations.
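For reference, the Lindblad (GKSL) master equation underlying this measurement model has the standard form, with ρ the density operator, H the Hamiltonian, and the L_k the jump operators describing the Markovian coupling to the measuring device:

```latex
\frac{d\rho}{dt} = -\frac{i}{\hbar}\,[H, \rho]
+ \sum_{k} \left( L_{k}\,\rho\,L_{k}^{\dagger}
- \tfrac{1}{2}\,\bigl\{ L_{k}^{\dagger} L_{k},\, \rho \bigr\} \right) .
```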
297

Quantitative molecular orientation imaging of biological structures by polarized super-resolution fluorescence microscopy / Imagerie quantitative d'orientation moléculaire dans les structures biologiques par microscopie super-résolution polarisée

Ahmed, Haitham Ahmed Shaban 02 April 2015 (has links)
Dans cette thèse, nous avons construit et optimisé des méthodes de microscopie de fluorescence super-résolue stochastique, polarisée et quantitative qui nous permettent d'imager l'orientation moléculaire dans des environnements dynamiques et statiques à l'échelle de la molécule unique et avec une résolution nanoscopique. En utilisant un montage de microscopie super-résolue à lecture stochastique en combinaison avec une détection polarisée, nous avons pu reconstruire des images d'anisotropie de fluorescence avec une résolution spatiale de 40 nm. En particulier, nous avons pu imager l'ordre orientationnel d'assemblages biomoléculaires et cellulaires. Pour l'imagerie cellulaire, nous avons pu quantifier la capacité d'étiquettes fluorophores à reporter l'orientation moléculaire dans l'actine et les microtubules dans des cellules fixées. Nous avons également mis à profit la meilleure résolution et la détection polarisée pour étudier l'ordre moléculaire d'agrégats d'amyloïdes à l'échelle nanoscopique. Enfin, nous avons étudié l'interaction de la protéine de réparation RAD51 avec l'ADN par microscopie de fluorescence polarisée super-résolue, afin de quantifier l'ordre orientationnel de l'ADN et de la protéine RAD51 et de comprendre le mécanisme de recombinaison homologue de la réparation de l'ADN. / In this thesis we built and optimized quantitative polarized stochastic super-resolution fluorescence microscopy techniques that enabled us to image molecular orientation behaviors in static and dynamic environments at the single-molecule level and with nano-scale resolution. Using a scheme of stochastic read-out super-resolution microscopy in combination with polarized detection, we can reconstruct fluorescence anisotropy images at a spatial resolution of 40 nm. In particular, we have been able to use these techniques to quantify the molecular orientational order in cellular and bio-molecular assemblies.
For cellular imaging, we could quantify the ability of fluorophore labels to report the molecular orientation of actin and microtubules in fixed cells. Furthermore, we used the improvements in resolution and polarization detection to study the molecular order of amyloid aggregates at the nanoscopic scale. Finally, we studied the repair protein RAD51's interaction with DNA using dual-color polarized fluorescence microscopy to quantify the orientational order of DNA and RAD51, in order to understand the homologous recombination mechanism of DNA repair.
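The fluorescence anisotropy images mentioned in this record are conventionally computed per pixel from the two polarized detection channels; the standard estimator is shown below, where G is an instrumental correction factor (the thesis's exact estimator may differ):

```latex
r = \frac{I_{\parallel} - G\, I_{\perp}}{I_{\parallel} + 2\, G\, I_{\perp}} .
```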
299

Stability of Transfermium Elements at High Spin : Measuring the Fission Barrier of 254No / Stabilité des éléments transfermiums à haut spin : mesure de la barrière de fission du 254No

Henning, Gregoire 20 September 2012 (has links)
Les noyaux super-lourds offrent la possibilité d'étudier la structure nucléaire à trois limites simultanément : en charge Z, en spin I et en énergie d'excitation E∗. Ces noyaux n'existent que grâce à une barrière de fission créée par les effets de couche. Il est donc important de déterminer cette barrière de fission et sa dépendance en spin Bf(I), qui nous renseigne sur l'énergie de couche Eshell(I). Les théories prédisent des valeurs différentes pour la hauteur de la barrière, allant de Bf(I = 0) = 6.8 MeV dans un modèle macro-microscopique à 8.7 MeV pour des calculs de théorie de la fonctionnelle de la densité utilisant les interactions de Gogny ou de Skyrme. Une mesure de Bf fournit donc un test des théories. Pour étudier la barrière de fission, la méthode établie consiste à mesurer, par réaction de transfert, l'augmentation de la fission avec l'énergie d'excitation, caractérisée par le rapport des largeurs de décroissance Γfission/Γtotal. Cependant, pour les éléments lourds comme le 254No, il n'existe pas de cible appropriée pour une réaction de transfert. Il faut s'en remettre à un rapport de largeurs de décroissance complémentaire, Γγ/Γfission, et à sa dépendance en spin, déduite de la distribution d'entrée (I, E∗). Des mesures de la multiplicité et de l'énergie totale des rayons γ du 254No ont été faites aux énergies de faisceau de 219 et 223 MeV pour la réaction 208Pb(48Ca,2n) à ATLAS (Argonne Tandem Linac Accelerator System). Les rayons γ du 254No ont été détectés par le multi-détecteur Gammasphere, utilisé comme calorimètre et aussi comme détecteur de rayons γ de haute résolution. Les coïncidences avec les résidus d'évaporation au plan focal du Fragment Mass Analyzer ont permis de séparer les rayons γ du 254No de ceux issus de la fission, qui sont > 10^6 fois plus intenses. De ces mesures, la distribution d'entrée, c'est-à-dire la distribution initiale en I et E∗, est reconstruite.
Chaque point (I, E∗) de la distribution d'entrée est un point où la décroissance γ l'a emporté sur la fission et contient donc une information sur la barrière de fission. La distribution d'entrée mesurée montre une augmentation du spin maximal et de l'énergie d'excitation entre les énergies de faisceau de 219 et 223 MeV. La distribution présente une saturation de E∗ à hauts spins, attribuée au fait que, lorsque E∗ augmente au-dessus de la barrière, Γfission domine rapidement. Il en résulte une troncature de la distribution d'entrée à haute énergie, qui permet de déterminer la hauteur de la barrière de fission. La distribution d'entrée mesurée est également comparée à des distributions d'entrée calculées par des simulations de cascades de décroissance qui prennent en compte le processus de formation du noyau, incluant la capture et la survie, en fonction de E∗ et I. Dans ce travail, nous avons utilisé les codes KEWPIE2 et NRV pour simuler les distributions d'entrée. / Super-heavy nuclei provide opportunities to study nuclear structure near three simultaneous limits: in charge Z, spin I and excitation energy E∗. These nuclei exist only because of a fission barrier created by shell effects. It is therefore important to determine the fission barrier and its spin dependence Bf(I), which gives information on the shell energy Eshell(I). Theoretical calculations predict different fission barrier heights, from Bf(I = 0) = 6.8 MeV for a macro-microscopic model to 8.7 MeV for density functional theory calculations using the Gogny or Skyrme interactions. Hence, a measurement of Bf provides a test for theories. To investigate the fission barrier, an established method is to measure the rise of fission with excitation energy, characterized by the ratio of decay widths Γfission/Γtotal, using transfer reactions. However, for heavy elements such as 254No, there is no suitable target for a transfer reaction.
We therefore rely on the complementary decay-width ratio Γγ/Γfission and its spin dependence, deduced from the entry distribution (I, E∗). Measurements of the gamma-ray multiplicity and total energy for 254No have been performed at beam energies of 219 and 223 MeV in the reaction 208Pb(48Ca,2n) at ATLAS (Argonne Tandem Linac Accelerator System). The 254No gamma rays were detected using the Gammasphere array as a calorimeter, as well as the usual high-resolution γ-ray detector. Coincidences with evaporation residues at the Fragment Mass Analyzer focal plane separated 254No gamma rays from those of fission fragments, which are > 10^6 times more intense. From this measurement, the entry distribution, i.e. the initial distribution of I and E∗, is constructed. Each point (I, E∗) of the entry distribution is a point where gamma decay wins over fission and therefore gives information on the fission barrier. The measured entry distributions show an increase in the maximum spin and excitation energy from 219 to 223 MeV beam energy. The distributions show a saturation of E∗ at high spins, attributed to the fact that, as E∗ increases above the saddle, Γfission rapidly dominates. The resulting truncation of the entry distribution at high E∗ allows a determination of the fission barrier height. The experimental entry distributions are also compared with entry distributions calculated with decay cascade codes which take into account the full nucleus formation process, including capture and the subsequent survival probability as a function of E∗ and I. We used the KEWPIE2 and NRV codes to simulate the entry distribution.
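The competition that shapes the entry distribution can be sketched numerically. The toy model below is not KEWPIE2 or NRV: the width parametrizations, the barrier height and its assumed spin dependence are illustrative values only, chosen to show why gamma decay (and hence the entry distribution) is cut off once E∗ rises above the saddle:

```python
import numpy as np

def fission_width(E_star, I, Bf0=6.8, hbar_omega=0.8):
    """Toy transmission-style fission width (arbitrary units):
    negligible below an assumed spin-dependent barrier, rising
    rapidly once E* exceeds it. The smooth I(I+1) barrier drop
    is an illustrative assumption, not a fitted dependence."""
    Bf = Bf0 - 0.005 * I * (I + 1)
    return 1.0 / (1.0 + np.exp(-2.0 * np.pi * (E_star - Bf) / hbar_omega))

def gamma_width(E_star):
    """Toy constant gamma width (same arbitrary units)."""
    return 0.05 * np.ones_like(E_star)

E = np.linspace(0.0, 12.0, 121)  # excitation energy grid (MeV)
for I in (10, 30):
    # Probability that gamma decay wins over fission at (I, E*)
    P_gamma = gamma_width(E) / (gamma_width(E) + fission_width(E, I))
    E_half = E[np.argmin(np.abs(P_gamma - 0.5))]
    print(f"I = {I}: gamma decay survives up to roughly E* = {E_half:.1f} MeV")
```

In this toy picture the energy at which Γfission overtakes Γγ drops with spin, mirroring the high-E∗ truncation of the measured entry distribution from which the barrier height is extracted.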
300

Statistical and numerical optimization for speckle blind structured illumination microscopy / Optimisation numérique et statistique pour la microscopie à éclairement structuré non contrôlé

Liu, Penghuan 25 May 2018 (has links)
La microscopie à éclairement structuré (structured illumination microscopy, SIM) permet de dépasser la limite de résolution due à la diffraction en microscopie optique, en éclairant l'objet avec un ensemble de motifs périodiques parfaitement connus. Cependant, il s'avère difficile de contrôler exactement la forme des motifs éclairants. Qui plus est, de fortes distorsions de la grille de lumière peuvent être générées par l'échantillon lui-même dans le volume d'étude, ce qui peut provoquer de forts artefacts dans les images reconstruites. Récemment, des approches dites blind-SIM ont été proposées, où les images sont acquises à partir de motifs d'éclairement inconnus, non périodiques, de type speckle, bien plus faciles à générer en pratique. Le pouvoir de super-résolution de ces méthodes a été observé, sans être bien compris théoriquement. Cette thèse présente deux nouvelles méthodes de reconstruction en microscopie à éclairements structurés inconnus (blind speckle-SIM) : une approche conjointe et une approche marginale. Dans l'approche conjointe, nous estimons conjointement l'objet et les motifs d'éclairement au moyen d'un modèle de type Basis Pursuit DeNoising (BPDN) avec une régularisation en norme lp,q, où p ≥ 1 et 0 < q ≤ 1. La norme lp,q est introduite afin de prendre en compte une hypothèse de parcimonie sur l'objet. Dans l'approche marginale, nous reconstruisons uniquement l'objet, les motifs d'éclairement étant traités comme des paramètres de nuisance. Notre contribution est double. Premièrement, une analyse théorique démontre que l'exploitation des statistiques d'ordre deux des données permet d'accéder à un facteur de super-résolution de deux, lorsque le support de la densité spectrale du speckle correspond au support fréquentiel de la fonction de transfert du microscope. Ensuite, nous abordons le problème du calcul numérique de la solution.
Afin de réduire à la fois le coût de calcul et les ressources en mémoire, nous proposons un estimateur marginal à base de patches. L'élément clé de cette méthode est de négliger l'information de corrélation entre les pixels appartenant à des patches différents. Des résultats de simulation et une application à des données réelles démontrent la capacité de super-résolution de nos méthodes. De plus, celles-ci peuvent être appliquées aussi bien à des problèmes de reconstruction 2D d'échantillons fins qu'à des problèmes d'imagerie 3D d'objets plus épais. / Conventional structured illumination microscopy (SIM) can surpass the resolution limit in optical microscopy caused by diffraction by illuminating the object with a set of perfectly known harmonic patterns. However, controlling the illumination patterns is a difficult task. Even worse, strong distortions of the light grid can be induced by the sample within the investigated volume, which may give rise to strong artifacts in SIM reconstructed images. Recently, blind-SIM strategies were proposed, where images are acquired through unknown, non-harmonic, speckle illumination patterns, which are much easier to generate in practice. The super-resolution capacity of such approaches was observed, although it was not well understood theoretically. This thesis presents two new reconstruction methods in SIM using unknown speckle patterns (blind speckle-SIM): a joint reconstruction approach and a marginal reconstruction approach. In the joint reconstruction approach, we estimate the object and the speckle patterns together by considering a basis pursuit denoising (BPDN) model with lp,q-norm regularization, with p ≥ 1 and 0 < q ≤ 1. The lp,q-norm is introduced based on the sparsity assumption on the object. In the marginal approach, we only reconstruct the object, while the unknown speckle patterns are treated as nuisance parameters. Our contribution is twofold.
First, a theoretical analysis demonstrates that, using the second-order statistics of the data, blind speckle-SIM yields a super-resolution factor of two, provided that the support of the speckle spectral density equals the frequency support of the microscope point spread function. Then, the numerical implementation is addressed. In order to reduce the computational burden and the memory requirement of the marginal approach, a patch-based marginal estimator is proposed. The key idea behind the patch-based estimator is to neglect the correlation between pixels from different patches. Simulation results and experiments with real data demonstrate the super-resolution capacity of our methods. Moreover, the proposed methods can not only be applied to 2D super-resolution problems with thin samples, but are also compatible with 3D imaging of thick samples.
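The factor-of-two result for the second-order statistics can be illustrated with a small one-dimensional simulation. This is a hedged sketch, not the estimator developed in the thesis: the ideal low-pass OTF, the squared low-passed Gaussian speckle model, and all sizes are assumptions chosen for illustration. An object frequency beyond the OTF cutoff is absent from the mean (widefield-like) image but reappears in the empirical variance image:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256      # 1D grid size (assumed)
fc = 20      # OTF cutoff in FFT bins (assumed)

def low_pass(sig, cutoff):
    """Ideal low-pass filter standing in for the microscope OTF."""
    S = np.fft.fft(sig)
    k = np.fft.fftfreq(len(sig), d=1.0 / len(sig))  # integer bin frequencies
    S[np.abs(k) > cutoff] = 0.0
    return np.real(np.fft.ifft(S))

# Object with a component at bin 30 > fc: invisible in one widefield image.
x = np.arange(N)
obj = 1.0 + 0.5 * np.cos(2.0 * np.pi * 30 * x / N)

M = 2000
mean_img = np.zeros(N)
mean_sq = np.zeros(N)
for _ in range(M):
    # Positive speckle pattern whose underlying field is bandlimited to fc.
    speckle = low_pass(rng.standard_normal(N), fc) ** 2
    img = low_pass(obj * speckle, fc)   # simulated low-resolution acquisition
    mean_img += img / M
    mean_sq += img**2 / M
var_img = mean_sq - mean_img**2         # empirical second-order statistic

def band_energy(sig, k0, halfwidth=2):
    """Spectral magnitude summed around bin k0 (DC removed)."""
    S = np.abs(np.fft.fft(sig - sig.mean()))
    k = np.abs(np.fft.fftfreq(len(sig), d=1.0 / len(sig)))
    return S[(k > k0 - halfwidth) & (k < k0 + halfwidth)].sum()

print("bin-30 energy, mean image:    ", band_energy(mean_img, 30))
print("bin-30 energy, variance image:", band_energy(var_img, 30))
```

The mean image is strictly bandlimited to fc, so any bin-30 energy it shows is numerical noise, while the variance image, built from products of bandlimited images, carries object frequencies up to 2fc — which is the super-resolution factor of two stated above.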
