1

Temporal relations in English and German narrative discourse

Schilder, Frank January 1997 (has links)
Understanding the temporal relations which hold between situations described in a narrative is a highly complex process. The main aim of this thesis is to investigate the factors we have to take into account in order to determine the temporal coherence of a narrative discourse. In particular, aspectual information, tense, and world and context knowledge have to be considered and the interplay of all these factors must be specified. German is aspectually speaking an interesting language, because it does not possess a grammaticalised distinction between a perfective and imperfective aspect. In this thesis I examine the German aspectual system and the interaction of the factors which have an influence on the derived temporal relation for short discourse sequences. The analysis is carried out in two steps: First, the aspectual and temporal properties of German are investigated, following the cross-linguistic framework developed by Carlota S. Smith. An account for German is given which emphasises the properties which are peculiar to this language and explains why it has to be treated differently to, for example, English. The main result for the tense used in a narrative text—the Preterite—is that information regarding the end point of a described situation is based on our world knowledge and may be overridden provided context knowledge forces us to do this. Next, the more complex level of discourse is taken into account in order to derive the temporal relations which hold between the described situations. This investigation provides us with insights into the interaction of different knowledge sources like aspectual information as well as world and context knowledge. This investigation of German discourse sequences gives rise to the need for a time logic which is capable of expressing fine as well as coarse (or underspecified) temporal relations between situations. An account is presented to describe exhaustively all conceivable temporal relations within a computationally tractable reasoning system, based on the interval calculus by James Allen. However, in order to establish a coherent discourse for larger sequences, the hierarchical structure of a narrative has to be considered as well. I propose a Tree Description Grammar — a further development of Tree Adjoining Grammars — for parsing the given discourse structure, and stipulate discourse principles which give an explanation for the way a discourse should be processed. I furthermore discuss how a discourse grammar needs to distinguish between discourse structure and discourse processing. The latter term can be understood as navigating through a discourse tree, and reflects the process of how a discourse is comprehended. Finally, a small fragment of German is given which shows how the discourse grammar can be applied to short discourse sequences of four to seven sentences. The conclusion discusses the outcome of the analysis conducted in this thesis and proposes likely areas of future research.
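As an aside illustrating the coarse, underspecified temporal relations mentioned above, the sketch below models the relation between two situations as a set of Allen's thirteen basic interval relations and narrows it by intersecting constraints from different knowledge sources. The relation names are Allen's; the particular constraint sets are invented for illustration and are not taken from the thesis.

```python
# Illustrative sketch only: an underspecified temporal relation is modelled as a
# set of Allen's 13 basic interval relations; knowledge sources (aspect, tense,
# world and context knowledge) each contribute a constraint set, and the derived
# relation is their intersection.

ALLEN_BASIC = {
    "before", "after", "meets", "met-by", "overlaps", "overlapped-by",
    "starts", "started-by", "during", "contains", "finishes", "finished-by",
    "equal",
}

# A coarse "anteriority" relation (hypothetical constraint from tense plus
# world knowledge about the end point of the first situation).
anteriority = {"before", "meets", "overlaps"}

# A hypothetical constraint contributed by context knowledge.
context_constraint = {"before", "meets"}

# The derived relation: still coarse, but narrower than either constraint alone.
derived = anteriority & context_constraint
print(sorted(derived))  # ['before', 'meets']
```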
2

Stylisation temporellement cohérente d'animations 3D basée sur des textures / Temporally coherent stylization of 3D animations based on textures

Bénard, Pierre 07 July 2011 (has links)
This PhD thesis deals with expressive rendering, a sub-field of computer graphics which aims at defining creation and processing tools to stylize images and animations. It has applications in all fields that need depictions more stylized than photographs, such as entertainment (e.g., video games, animated films, cartoons), virtual heritage, and technical illustration. A crucial criterion for assessing the quality of an image is the absence of visual artifacts. While already true for traditional art, this consideration is especially important in computer graphics, since the intrinsically discrete nature of an image can itself lead to artifacts. This is even more noticeable during animations, as temporal artifacts are added to spatial ones. The goal of this thesis is twofold: (1) to formalize and measure these artifacts while taking human perception into account; (2) to propose new interactive methods to stylize 3D animations. First we present a set of techniques to create and ensure the coherence of line drawings extracted from 3D animated scenes. Then we propose two methods to stylize shaded regions, which allow a wide variety of patterns to be created. The common ground of all these approaches is the representation of the simulated medium (e.g., watercolor pigments, pencil or brush strokes) by a texture that evolves over the course of the animation. Finally, we describe two user studies aiming at perceptually evaluating the quality of the results produced by such techniques.
3

Representation learning with a temporally coherent mixed-representation

Parkinson, Jon January 2017 (has links)
Guiding a representation towards capturing temporally coherent aspects present in video improves object identity encoding. Existing models apply temporal coherence uniformly over all features based on the assumption that optimal encoding of object identity only requires temporally stable components. We test the validity of this assumption by exploring the effects of applying a mixture of temporally coherent invariant features, alongside variable features, in a single 'mixed' representation. Applying temporal coherence to different proportions of the available features, we evaluate a range of models on a supervised object classification task. This series of experiments was tested on three video datasets, each with a different complexity of object shape and motion. We also investigated whether a mixed-representation improves the capture of information components associated with object position, alongside object identity, in a single representation. Tests were initially applied using a single layer autoencoder as a test bed, followed by subsequent tests investigating whether similar behaviour occurred in the more abstract features learned by a deep network. A representation applying temporal coherence in some fashion produced the best results in all tests, on both single layered and deep networks. The majority of tests favoured a mixed representation, especially in cases where the quantity of labelled data available to the supervised task was plentiful. This work is the first time a mixed-representation has been investigated, and demonstrates its use as a method for representation learning.
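A minimal sketch of the mixed-representation idea described above follows: a temporal-coherence (slowness) penalty is applied to only a proportion of an autoencoder's latent features, while the remaining features are left free to vary. The toy linear encoder/decoder, the loss weighting, and the variable names are assumptions made for illustration, not the models used in the thesis.

```python
import numpy as np

# Illustrative sketch: reconstruction loss plus a slowness penalty applied only
# to the first `n_coherent` latent features (a "mixed" representation).
rng = np.random.default_rng(0)

def mixed_loss(x_t, x_prev, W_enc, W_dec, n_coherent, lam=0.1):
    """Autoencoder reconstruction term plus temporal-coherence term on a subset."""
    z_t, z_prev = x_t @ W_enc, x_prev @ W_enc              # toy linear encoder
    recon = np.mean((x_t - z_t @ W_dec) ** 2)              # reconstruction of frame t
    slowness = np.mean((z_t[:, :n_coherent] - z_prev[:, :n_coherent]) ** 2)
    return recon + lam * slowness

# Toy usage: 8-dimensional inputs, 4-dimensional latent space, coherence applied
# to half of the features (n_coherent=2), batch of 16 consecutive frame pairs.
x_prev, x_t = rng.normal(size=(2, 16, 8))
W_enc = 0.1 * rng.normal(size=(8, 4))
W_dec = 0.1 * rng.normal(size=(4, 8))
print(mixed_loss(x_t, x_prev, W_enc, W_dec, n_coherent=2))
```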
4

Effet de la stimulation rythmique audio-tactile sur les mouvements de coordination / Effect of audio-tactile rhythmic stimulation on the coordination of movements

Roy, Charlotte 24 March 2017 (has links)
Our ability to merge information coming from several senses is crucial for producing and regulating our body movements. The main objective of this thesis is to study the effects of the factors governing multisensory integration on our rhythmic sensorimotor behaviours. The effects of multisensory integration on these behaviours are not well understood, having seldom been studied, even though such behaviours characterize most of our daily activities, such as walking, writing or playing sports. So far, multisensory processes have mainly been studied with regard to our discrimination and detection abilities, highlighting the need for temporal synchrony between modalities for their integration. The consequences of this temporal coherence and of the associated mechanisms have never been tested on rhythmic sensorimotor behaviours, so we seek to generalize their effects to these behaviours. Moreover, the literature reports that movement characteristics modify the processing of sensory information and also seem to influence multisensory integration. We therefore test the effect of the stability of the sensorimotor system, i.e. the intrinsic stability of gait, on multisensory integration. The two main contributions of this thesis are the following: (1) Rhythmic behaviours obey the same principles as temporal discrimination and detection behaviours; our results generalize the effects of temporal coherence and show, for the first time, a multisensory benefit on gait. (2) We put forward a novel sensorimotor compensation hypothesis, highlighting the adaptive use of multisensory information by the sensorimotor system, which compensates for the decrease in the intrinsic stability of gait with a greater and/or better use of external audio-tactile information.
5

Video view interpolation using temporally adaptive 3D meshes / Interpolação de vistas em video utilizando malhas 3D adaptativas

Fickel, Guilherme Pinto January 2015 (has links)
This thesis presents a new method for video view interpolation using multiview linear camera arrays, based on 2D domain triangulation. The domain of the reference image is initially partitioned into triangular regions using edge and scale information, aiming to place vertices along image edges and to increase the number of triangles in textured regions. A region-based matching algorithm is then used to find an initial disparity for each triangle, and a refinement stage is applied to change the disparity at the vertices of the triangles, generating a piecewise linear disparity map. A simple post-processing procedure is applied to connect triangles with similar disparities, generating a full 3D mesh related to each camera (view); these meshes are used to generate the new synthesized views along the camera baseline. In order to generate views with fewer temporal flickering artifacts, we propose a scheme to update the initial 3D mesh dynamically, by moving, deleting and inserting vertices at each frame based on optical flow. This approach allows triangles of the mesh to be related across time, and a combination of Hidden Markov Models (HMMs), applied to time-persistent triangles, with the Kalman filter, applied to vertices, is used so that temporal consistency can also be obtained. With the proposed framework, view interpolation reduces to the trivial task of rendering polygonal meshes, which can be done very fast, particularly when GPUs are employed. Furthermore, the generated views are hole-free, unlike most point-based view interpolation schemes, which require some kind of post-processing procedure to fill holes. Experimental results indicate that our approach was able to generate visually coherent in-between interpolated views for challenging, real-world videos with natural lighting and camera movement. In addition, a quantitative evaluation using video quality metrics showed that the interpolated video sequences are better than those of competing approaches.
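The rendering step at the heart of the method described above can be sketched as follows: once each mesh vertex carries an image position and a disparity, a virtual view at a fractional position along the camera baseline is obtained by shifting the vertices horizontally in proportion to their disparities and rasterizing the triangles. This is a minimal illustration of the warp only, with invented numbers; the triangulation, matching, refinement, HMM and Kalman filtering stages of the actual method are not reproduced here.

```python
import numpy as np

# Illustrative sketch: per-vertex disparities turn view interpolation into a
# simple horizontal shift of the mesh vertices followed by triangle rasterization.

def interpolate_vertices(xy, disparity, alpha):
    """Shift vertex positions along the baseline for an in-between view.

    alpha = 0 reproduces the reference view, alpha = 1 the other camera.
    """
    out = xy.astype(float).copy()
    out[:, 0] += alpha * disparity      # piecewise-linear warp over each triangle
    return out

# One triangle of the mesh (image coordinates in pixels) with its disparities.
vertices = np.array([[10.0, 20.0], [50.0, 22.0], [30.0, 60.0]])
disparity = np.array([4.0, 4.5, 3.8])
print(interpolate_vertices(vertices, disparity, alpha=0.5))
```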
6

An Empirically Based Stochastic Turbulence Simulator with Temporal Coherence for Wind Energy Applications

Rinker, Jennifer Marie January 2016 (has links)
In this dissertation, we develop a novel methodology for characterizing and simulating nonstationary, full-field, stochastic turbulent wind fields. In this new method, nonstationarity is characterized and modeled via temporal coherence, which is quantified in the discrete frequency domain by probability distributions of the differences in phase between adjacent Fourier components. The empirical distributions of the phase differences can also be extracted from measured data, and the resulting temporal coherence parameters can quantify the occurrence of nonstationarity in empirical wind data. This dissertation (1) implements temporal coherence in a desktop turbulence simulator, (2) calibrates empirical temporal coherence models for four wind datasets, and (3) quantifies the increase in lifetime wind turbine loads caused by temporal coherence. The four wind datasets were intentionally chosen from locations around the world so that they had significantly different ambient atmospheric conditions. The prevalence of temporal coherence and its relationship to other standard wind parameters was modeled through empirical joint distributions (EJDs), which involved fitting marginal distributions and calculating correlations. EJDs have the added benefit of being able to generate samples of wind parameters that reflect the characteristics of a particular site. Lastly, to characterize the effect of temporal coherence on design loads, we created four models in the open-source wind turbine simulator FAST based on the WindPACT turbines, fit response surfaces to them, and used the response surfaces to calculate lifetime turbine responses to wind fields simulated with and without temporal coherence. The training data for the response surfaces was generated from exhaustive FAST simulations that were run on the high-performance computing (HPC) facilities at the National Renewable Energy Laboratory. This process was repeated for wind field parameters drawn from the empirical distributions and for wind samples drawn using the recommended procedure in the IEC wind turbine design standard. The effect of temporal coherence was calculated as a percent increase in the lifetime load over the base value with no temporal coherence.
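The phase-difference mechanism described above can be sketched as follows: a Fourier synthesis with independent, uniformly distributed phases yields a stationary time series, whereas drawing the differences between the phases of adjacent frequency components from a concentrated circular distribution concentrates energy in time and produces coherent, nonstationary bursts. The von Mises distribution and the toy amplitude spectrum below are assumptions made for illustration; the dissertation calibrates empirical distributions of the phase differences from measured wind data.

```python
import numpy as np

# Illustrative sketch: temporal coherence introduced through the distribution of
# phase differences between adjacent Fourier components.
rng = np.random.default_rng(1)
n_freq = 512
amps = 1.0 / np.arange(1, n_freq + 1)          # toy amplitude spectrum

def synthesize(phase_diffs, amps):
    phases = np.cumsum(phase_diffs)             # phases recovered from their differences
    spectrum = np.concatenate(([0.0], amps * np.exp(1j * phases)))
    return np.fft.irfft(spectrum)               # real time series

# Stationary case: independent, uniform phase differences.
u_stationary = synthesize(rng.uniform(-np.pi, np.pi, n_freq), amps)
# Temporally coherent case: concentrated phase differences (von Mises, kappa = 5).
u_coherent = synthesize(rng.vonmises(0.0, 5.0, n_freq), amps)

# The coherent series typically shows a larger crest factor, i.e. a burst in time.
print(np.max(np.abs(u_stationary)) / u_stationary.std())
print(np.max(np.abs(u_coherent)) / u_coherent.std())
```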
7

Vers les lasers XUV femtosecondes : étude des propriétés spectrales et temporelles de l'amplification de rayonnement XUV dans un plasma / Toward X-ray lasers : study of the spectral and temporal properties of X-ray radiation amplification in a plasma

Le Marec, Andréa 19 October 2016 (has links)
This thesis was carried out in the context of efforts to reduce the pulse duration of plasma-based XUV lasers down to the femtosecond domain. The very narrow spectral width of the amplifier medium (~10^10 to 10^11 Hz) limits the minimum achievable pulse duration (Fourier limit). The amplifier media of collisionally pumped XUV lasers are dense, hot plasmas that can be created either by fast electrical discharge or by different types of high-power lasers. There are thus four distinct types of XUV laser sources, with different plasma parameters (density, temperature) in the gain region, and the spectral and temporal properties of the emitted radiation are strongly linked to these parameters. All four types operate in amplified spontaneous emission (ASE) mode, and for a few years two of them have also been able to operate in "seeded" mode. This technique consists in injecting a femtosecond high-order harmonic pulse (the seed), resonant with the lasing transition, at one end of the plasma amplifier. Because of the large mismatch between the spectral width of the plasma and that of the seed, the femtosecond duration of the latter is not preserved during amplification. Simulations (COLAX Maxwell-Bloch code) show that the amplification is highly non-linear in such systems, notably with the appearance of Rabi oscillations. Generating Rabi oscillations in seeded XUV lasers is currently considered a promising route to femtosecond XUV lasers, but they have never been demonstrated experimentally. Reaching this goal therefore requires a meticulous experimental characterization of the spectral properties of the four types of XUV lasers in connection with the plasma conditions, combined with a better understanding, based on theoretical studies and simulations, of the amplification mechanisms under different plasma conditions. A wide experimental campaign aiming to characterize spectrally all the different types of XUV lasers has been conducted by our group over the past decade. Since the required spectral resolution is beyond the best current spectrometers, the method used consists in measuring the temporal coherence of the XUV laser through an electric-field autocorrelation, using a wavefront-division interferometer specifically designed for these measurements, from which the spectral width can be deduced. The last of the four XUV laser types (PALS, Prague) was characterized during this thesis, completing this campaign. The measured coherence time is 0.68 ps, significantly lower than the coherence times measured on the other XUV laser types. Analysis of the overall results revealed two different behaviours, depending on whether the pulse duration of the XUV laser is long compared to its coherence time or close to it. In the first case the inferred spectral widths are in good agreement with theoretical predictions; in the second case the agreement is poorer and the shape of the electric-field autocorrelation traces was not understood. This observation prompted a detailed study of the influence of the temporal properties of ASE XUV lasers on the interferometric method used to determine their spectral width. This study, based on a model developed for X-ray free-electron lasers, revealed an effect of partial temporal coherence on the electric-field autocorrelation measurements of these sources. It opens perspectives on using our method to measure simultaneously the spectral width and the pulse duration of the source. Finally, a study based on the Maxwell-Bloch equations was carried out to better understand the conditions under which Rabi oscillations appear during amplification of the harmonic seed in the XUV laser plasma. Two amplification regimes, adiabatic and dynamic, around a population-inversion threshold, were highlighted.
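The deduction of a spectral width from the measured field autocorrelation mentioned above rests on standard textbook relations (the Wiener-Khinchin theorem and Mandel's definition of the coherence time), recalled below for the reader; the numerical factor linking coherence time and spectral width depends on the line shape and is not given by the abstract.

```latex
% General definitions assumed here (textbook relations, not thesis-specific):
% gamma(tau): normalized field autocorrelation, S(nu): normalized spectral density.
\gamma(\tau) = \frac{\langle E^{*}(t)\,E(t+\tau)\rangle}{\langle |E(t)|^{2}\rangle},
\qquad
S(\nu) = \int_{-\infty}^{+\infty} \gamma(\tau)\, e^{-2\pi i \nu \tau}\, \mathrm{d}\tau,
\qquad
\tau_{c} = \int_{-\infty}^{+\infty} |\gamma(\tau)|^{2}\, \mathrm{d}\tau,
\qquad
\Delta\nu \sim \frac{1}{\tau_{c}}.
```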
8

Non-photorealistic rendering with coherence for augmented reality

Chen, Jiajian 16 July 2012 (has links)
A seamless blending of the real and virtual worlds is key to increased immersion and improved user experiences for augmented reality (AR). Photorealistic and non-photorealistic rendering (NPR) are two ways to achieve this goal. Non-photorealistic rendering creates an abstract and stylized version of both the real and virtual world, making them indistinguishable. This could be particularly useful in some applications (e.g., AR/VR-aided machine repair or virtual medical surgery) or for certain AR games with artistic stylization. Achieving temporal coherence is a key challenge for all NPR algorithms. Rendered results are temporally coherent when each frame smoothly and seamlessly transitions to the next one without visual flickering or artifacts that distract the eye from perceived smoothness. NPR algorithms with coherence are interesting in both the general computer graphics and AR/VR areas. Rendering stylized AR without coherence processing causes the final results to be visually distracting. While various NPR algorithms with coherence support have been proposed in the general graphics community for video processing, many of these algorithms require thorough analysis of all frames of the input video and cannot be directly applied to real-time AR applications. We have investigated existing NPR algorithms with coherence in both the general graphics and AR/VR areas. These algorithms are divided into two categories: Model Space and Image Space. We present several NPR algorithms with coherence for AR: a watercolor-inspired NPR algorithm, a painterly rendering algorithm, and NPR algorithms in the model space that can support several styling effects.
