  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Information loss in black holes and the unitarity of quantum mechanics

Cozzella, Gabriel. January 2016 (has links)
Advisor: George Emanuel Avraam Matsas / The quantum theory of fields in curved space-times is the most solid framework we have for studying the interplay between gravity and quantum mechanics in the absence of a complete theory of quantum gravity. In this scenario, one problem that has drawn much attention from the theoretical physics community in the last decades is the so-called "black hole information loss paradox", where the evolution from an initial pure quantum state to a final mixed quantum state would constitute a violation of the laws of quantum mechanics. In this dissertation we argue that information loss in itself does not violate the laws of quantum mechanics and is a direct consequence of the semi-classical framework adopted, and that the question of information recovery must be addressed by a yet unknown theory of quantum gravity. / Master's (Mestre)
2

Twisting and Gluing : On Topological Field Theories, Sigma Models and Vertex Algebras

Källén, Johan January 2012 (has links)
This thesis consists of two parts, which can be read separately. In the first part we study aspects of topological field theories. We show how to topologically twist three-dimensional N=2 supersymmetric Chern-Simons theory using a contact structure on the underlying manifold. This gives us a formulation of Chern-Simons theory together with a set of auxiliary fields and an odd symmetry. For Seifert manifolds, we show how to use this odd symmetry to localize the path integral of Chern-Simons theory. The formulation of three-dimensional Chern-Simons theory using a contact structure admits natural generalizations to higher dimensions. We introduce and study these theories. The focus is on the five-dimensional theory, which can be understood as a topologically twisted version of N=1 supersymmetric Yang-Mills theory. When formulated on contact manifolds that are circle fibrations over a symplectic manifold, it localizes to contact instantons. For the theory on the five-sphere, we show that the perturbative part of the partition function is given by a matrix model. In the second part of the thesis, we study supersymmetric sigma models in the Hamiltonian formalism, both in a classical and in a quantum mechanical setup. We argue that the so called Chiral de Rham complex, which is a sheaf of vertex algebras, is a natural framework to understand quantum aspects of supersymmetric sigma models in the Hamiltonian formalism. We show how a class of currents which generate symmetry algebras for the classical sigma model can be defined within the Chiral de Rham complex framework, and for a six-dimensional Calabi-Yau manifold we calculate the equal-time commutators between the currents and show that they generate the Odake algebra.
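For reference, and as a standard textbook expression rather than the contact-structure formulation developed in the thesis, the three-dimensional Chern-Simons action for a gauge connection A at level k reads

\[
S_{\mathrm{CS}}[A] = \frac{k}{4\pi} \int_{M} \mathrm{Tr}\!\left( A \wedge \mathrm{d}A + \tfrac{2}{3}\, A \wedge A \wedge A \right),
\]

and it depends on the three-manifold M only through its topology; the formulation studied in the thesis adds auxiliary fields and an odd symmetry on top of this action by means of a contact structure on M.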
3

Compressão de ruído quântico em um interferômetro Sagnac em fibra com laser pulsado em 1,55 microns / Quantum noise squeezing in a fiber Sagnac interferometer with a pulsed laser at 1.55 microns

PLAZAS ORTEGA, Lucero 24 February 2015 (has links)
CAPES / Quadrature components of the electromagnetic field obey a Heisenberg uncertainty relation. As a result, quantum mechanics predicts that these components cannot be specified simultaneously with unlimited accuracy. Minimum-uncertainty states (coherent states) attain the smallest possible uncertainty product, with both quadratures having fluctuations of the same magnitude. It is nevertheless possible to construct squeezed quantum states, for which one of the quadrature components has a smaller standard deviation than that of a minimum-uncertainty state, at the cost of an increased uncertainty in the other quadrature. In this dissertation these concepts are reviewed and an experimental scheme for generating squeezed states of light is presented. The setup is based on a pulsed fiber laser (wavelength λ = 1.56 µm, repetition rate fR = 146 MHz, pulse width τp = 200 fs) coupled to a nonlinear interferometer consisting of a fiber loop in a Sagnac configuration. Besides a description of the main optical components, the mechanisms for generating and characterizing the squeezed states are discussed.
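For orientation, the quadrature relations behind this discussion can be summarized in one common convention (the normalization here is chosen for illustration and is not necessarily the one used in the dissertation):

\[
\hat X_1 = \tfrac{1}{2}\big(\hat a + \hat a^{\dagger}\big), \qquad
\hat X_2 = \tfrac{1}{2i}\big(\hat a - \hat a^{\dagger}\big), \qquad
[\hat X_1, \hat X_2] = \tfrac{i}{2}
\;\;\Longrightarrow\;\;
\Delta X_1\,\Delta X_2 \ge \tfrac{1}{4}.
\]

Coherent states saturate the bound with \(\Delta X_1 = \Delta X_2 = \tfrac{1}{2}\), while an ideal squeezed state has \(\Delta X_1 = \tfrac{1}{2}e^{-r}\) and \(\Delta X_2 = \tfrac{1}{2}e^{r}\) for a squeezing parameter r > 0.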
4

Développement d'algorithmes temps réel de traitement de séquences d'images animées biomédicales pour la thermothérapie guidée par IRM / Development of real-time algorithms for processing dynamic biomedical image sequences for MRI-guided thermotherapy

Maclair, Grégory 05 December 2008 (has links)
MR-guided local hyperthermia has entered clinical practice over the last few years. The principle relies on destroying tumour cells through a temperature elevation sustained for a sufficient duration. Acquiring MR images simultaneously with the heating allows on-line monitoring of the temperature and control of the cell destruction induced by the hyperthermia applied to the tissues. The main objective of this thesis is to implement strategies that correct the artefacts induced by respiration or the heartbeat, both for mobile organs (liver, kidney, heart) and for non-mobile organs (breasts). The main contribution of this work is a method for modelling displacement fields with Principal Component Analysis (PCA), exploiting the periodic character of the observed physiological phenomena. / During the last years MR-guided local hyperthermia has been developed for clinical practice. Hyperthermia is based on the destruction of tumor cells due to an increased temperature over a sufficient amount of time. The simultaneous acquisition of MR images during the heating process allows on-line monitoring of the temperature and control of the tissue destruction induced by the hyperthermia treatment. The on-line calculation of the temperature is not a trivial problem, because physical and physiological phenomena such as respiration disturb the temperature measurement. Hence, it is necessary to develop strategies for real-time correction in order to deliver precise temperature maps. In this work, we propose several image-processing algorithms enabling the correction of artefacts related to motion and magnetic-susceptibility effects. These corrections provide a reliable estimation of the efficiency of the hyperthermia treatment.
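The core idea summarized above, compressing periodic displacement fields into a few principal components, can be sketched in a few lines of Python. This is a generic illustration only, with made-up array shapes and hypothetical function names (learn_motion_basis, encode_new_field), not the processing pipeline actually implemented in the thesis:

```python
import numpy as np

def learn_motion_basis(displacement_fields, n_components=3):
    """Learn a low-dimensional basis for periodic motion from a training set.

    displacement_fields: array of shape (n_frames, H, W, 2) holding per-pixel
    (dx, dy) displacements estimated during a learning phase (e.g. free breathing).
    Returns the mean field and the first principal components (flattened).
    """
    n_frames = displacement_fields.shape[0]
    flat = displacement_fields.reshape(n_frames, -1)   # one row per frame
    mean = flat.mean(axis=0)
    u, s, vt = np.linalg.svd(flat - mean, full_matrices=False)
    return mean, vt[:n_components]                      # principal directions

def encode_new_field(field, mean, components):
    """Project a newly estimated displacement field onto the learned basis."""
    return components @ (field.ravel() - mean)

def reconstruct(coeffs, mean, components, shape):
    """Rebuild a smooth displacement field from a handful of PCA coefficients."""
    return (mean + coeffs @ components).reshape(shape)

# Toy usage: 50 training frames of a 64x64 field with 2 displacement channels.
rng = np.random.default_rng(0)
train = rng.standard_normal((50, 64, 64, 2))
mean, comps = learn_motion_basis(train)
new_field = rng.standard_normal((64, 64, 2))
coeffs = encode_new_field(new_field, mean, comps)
approx = reconstruct(coeffs, mean, comps, new_field.shape)
```

Because respiratory and cardiac motion are quasi-periodic, a handful of components usually captures most of the displacement variance, which is the property the thesis exploits for real-time correction.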
5

Low-Energy Ion Escape from the Terrestrial Polar Regions

Engwall, Erik January 2009 (has links)
The contemporary terrestrial atmosphere loses matter at a rate of around 100,000 tons per year. A major fraction of the net mass loss is constituted by ions, mainly H+ and O+, which escape from the Earth’s ionosphere in the polar regions. Previously, the outflow has only been measured at low altitudes, but to understand what fraction actually escapes and does not return, the measurements should be conducted far from the Earth. However, at large geocentric distances the outflowing ions are difficult to detect with conventional ion instruments on spacecraft, since the spacecraft electrostatic potential normally exceeds the equivalent energy of the ions. This also means that little is known about the ion outflow properties and distribution in space far from the Earth. In this thesis, we present a new method to measure the outflowing low-energy ions in those regions where they previously have been invisible. The method is based on the detection by electric field instruments of the large wake created behind a spacecraft in a flowing, low-energy plasma. Since ions with low energy will create a larger wake, the method is more sensitive to light ions, and our measured outflow is essentially the proton outflow. Applying this new method on data from the Cluster spacecraft, we have been able to make an extensive statistical study of ion outflows from 5 to 19 Earth radii in the magnetotail lobes. We show that cold proton outflows dominate in these large regions of the magnetosphere in both flux and density. Our outflow values of low-energy protons are close to those measured at low altitudes, which confirms that the ionospheric outflows continue far back in the tail and contribute significantly to the magnetospheric content. We also conclude that most of the ions are escaping and not returning, which improves previous estimates of the global outflow. The total loss of protons due to high-latitude escape is found to be on the order of 10^26 protons/s.
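As a rough consistency check (a back-of-the-envelope estimate added here, not a figure quoted from the thesis), an escape rate of 10^26 protons per second corresponds to a mass loss of about

\[
10^{26}\,\mathrm{s^{-1}} \times 1.67\times 10^{-27}\,\mathrm{kg} \approx 0.17\,\mathrm{kg\,s^{-1}} \approx 5\times 10^{6}\,\mathrm{kg\,yr^{-1}} \approx 5{,}000\ \text{tonnes per year},
\]

a few per cent of the roughly 100,000 tons per year of total atmospheric loss quoted above, leaving room for the sixteen-times-heavier O+ ions to carry much of the remaining ion mass flux.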
6

Information loss in black holes and the unitarity of quantum mechanics / Perda de informação em buracos negros e a unitariedade da mecânica quântica

Cozzella, Gabriel [UNESP] 26 July 2016 (has links)
Submitted by Gabriel Cozzella (cozzella@ift.unesp.br) on 2016-08-21T00:22:31Z No. of bitstreams: 1 MSc Dissertation.pdf: 773902 bytes, checksum: a2d2f3173a5f3ec0ab24f201d71929da (MD5) / Approved for entry into archive by Ana Paula Grisoto (grisotoana@reitoria.unesp.br) on 2016-08-24T16:43:46Z (GMT) No. of bitstreams: 1 cozzella_g_me_ift.pdf: 773902 bytes, checksum: a2d2f3173a5f3ec0ab24f201d71929da (MD5) / Made available in DSpace on 2016-08-24T16:43:46Z (GMT). No. of bitstreams: 1 cozzella_g_me_ift.pdf: 773902 bytes, checksum: a2d2f3173a5f3ec0ab24f201d71929da (MD5) Previous issue date: 2016-07-26 / Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP) / A teoria quântica de campos em espaços-tempos curvos é o arcabouço teórico mais sólido que temos para estudar a interação entre gravitação e mecânica quântica na ausência de uma teoria completa de gravitação quântica. Neste contexto, um problema que atraiu muita atenção dos físicos teóricos nas últimas décadas é o chamado “paradoxo da perda de informação em buracos negros”, onde a evolução de um estado quântico puro inicial para um estado quântico misto final caracterizaria uma violação das leis da mecânica quântica. Nesta dissertação nós argumentamos que a perda de informação em si não viola as leis da mecânica quântica e é consequência direta da teoria semi-clássica utilizada. Finalmente, argumentamos que a questão da recuperação da informação deve ser tratada utilizando-se uma teoria de gravitação quântica ainda desconhecida. / The quantum theory of fields in curved space-times is the most solid framework for studying the interplay between gravity and quantum mechanics in the absence of a complete theory of quantum gravity. In this scenario, one problem that has drawn much attention from the theoretical physics community in the last decades is the so-called “black hole information loss paradox”, where the evolution from an initial pure quantum state to a final mixed quantum state would constitute a violation of the laws of quantum mechanics. In this dissertation we argue that information loss does not violate quantum mechanics, being simply a consequence of the semi-classical framework adopted and that the question of information recovery needs to be addressed by a yet unknown theory of quantum gravity. / FAPESP: 2014/08684-9
7

Towards Dense Air Quality Monitoring : Time-Dependent Statistical Gas Distribution Modelling and Sensor Planning

Asadi, Sahar January 2017 (has links)
This thesis addresses the problem of gas distribution modelling for gas monitoring and gas detection. The presented research is particularly focused on the methods that are suitable for uncontrolled environments. In such environments, gas source locations and the physical properties of the environment, such as humidity and temperature may be unknown or only sparse noisy local measurements are available. Example applications include air pollution monitoring, leakage detection, and search and rescue operations. This thesis addresses how to efficiently obtain and compute predictive models that accurately represent spatio-temporal gas distribution. Most statistical gas distribution modelling methods assume that gas dispersion can be modelled as a time-constant random process. While this assumption may hold in some situations, it is necessary to model variations over time in order to enable applications of gas distribution modelling for a wider range of realistic scenarios. This thesis proposes two time-dependent gas distribution modelling methods. In the first method, a temporal (sub-)sampling strategy is introduced. In the second method, a time-dependent gas distribution modelling approach is presented, which introduces a recency weight that relates measurement to prediction time. These contributions are presented and evaluated as an extension of a previously proposed method called Kernel DM+V using several simulation and real-world experiments. The results of comparing the proposed time-dependent gas distribution modelling approaches to the time-independent version Kernel DM+V indicate a consistent improvement in the prediction of unseen measurements, particularly in dynamic scenarios under the condition that there is a sufficient spatial coverage. Dynamic scenarios are often defined as environments where strong fluctuations and gas plume development are present. For mobile robot olfaction, we are interested in sampling strategies that provide accurate gas distribution models given a small number of samples in a limited time span. Correspondingly, this thesis addresses the problem of selecting the most informative locations to acquire the next samples. As a further contribution, this thesis proposes a novel adaptive sensor planning method. This method is based on a modified artificial potential field, which selects the next sampling location based on the currently predicted gas distribution and the spatial distribution of previously collected samples. In particular, three objectives are used that direct the sensor towards areas of (1) high predictive mean and (2) high predictive variance, while (3) maximising the coverage area. The relative weight of these objectives corresponds to a trade-off between exploration and exploitation in the sampling strategy. This thesis discusses the weights or importance factors and evaluates the performance of the proposed sampling strategy. The results of the simulation experiments indicate an improved quality of the gas distribution models when using the proposed sensor planning method compared to commonly used methods, such as random sampling and sampling along a predefined sweeping trajectory. In this thesis, we show that applying a locality constraint on the proposed sampling method decreases the travelling distance, which makes the proposed sensor planning approach suitable for real-world applications where limited resources and time are available. 
As a real-world use case, we applied the proposed sensor planning approach on a micro-drone in outdoor experiments. Finally, this thesis discusses the potential of using gas distribution modelling and sensor planning in large-scale outdoor real-world applications. We integrated the proposed methods into a framework for decision-making in hazardous incidents involving gas leakage and applied the gas distribution modelling in two real-world use cases. Our investigation indicates that the proposed sensor planning and gas distribution modelling approaches can be used to inform experts both about the gas plume and the distribution of gas in order to improve the assessment of an incident.
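To make the two ingredients discussed above concrete, a recency weight on past measurements and a mean/variance/coverage trade-off for choosing the next sampling location, here is a minimal Python sketch. It is an illustrative kernel estimate with made-up parameters (sigma, tau and the objective weights), not the actual Kernel DM+V extension or the potential-field planner developed in the thesis:

```python
import numpy as np

def predict_gas(query_xy, t_query, samples_xy, samples_t, samples_c,
                sigma=1.0, tau=30.0):
    """Recency-weighted kernel estimate of gas concentration.

    Each past sample is weighted by a spatial Gaussian kernel and by an
    exponential recency factor exp(-(t_query - t_i)/tau), so that older
    measurements contribute less to the prediction.
    Returns (predictive mean, predictive variance).
    """
    d2 = np.sum((samples_xy - query_xy) ** 2, axis=1)
    w_space = np.exp(-0.5 * d2 / sigma**2)
    w_time = np.exp(-(t_query - samples_t) / tau)
    w = w_space * w_time
    w_sum = w.sum() + 1e-12
    mean = np.sum(w * samples_c) / w_sum
    var = np.sum(w * (samples_c - mean) ** 2) / w_sum
    return mean, var

def next_sample_score(candidate_xy, t_now, samples_xy, samples_t, samples_c,
                      w_mean=1.0, w_var=1.0, w_cov=1.0):
    """Score a candidate location: favour high predicted mean, high predictive
    variance, and distance from previously visited points (coverage)."""
    mean, var = predict_gas(candidate_xy, t_now, samples_xy, samples_t, samples_c)
    coverage = np.min(np.linalg.norm(samples_xy - candidate_xy, axis=1))
    return w_mean * mean + w_var * var + w_cov * coverage

# Toy usage: three past samples, pick the better of two candidate locations.
xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
t = np.array([0.0, 10.0, 20.0])
c = np.array([0.2, 0.8, 0.5])
cands = [np.array([2.0, 2.0]), np.array([0.5, 0.5])]
best = max(cands, key=lambda p: next_sample_score(p, 25.0, xy, t, c))
```

In the thesis the relative weights of the three objectives set the balance between exploration and exploitation; in this sketch they are simply passed in as w_mean, w_var and w_cov.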
8

On Space-Time Trade-Off for Montgomery Multipliers over Finite Fields

Chen, Yiyang 04 1900 (has links)
The multiplication in a Galois field with 2^m elements (i.e. GF(2^m)) is an important arithmetic operation in coding theory and cryptography. In this thesis, we focus on bit-parallel multipliers over the Galois fields generated by trinomials. We start by introducing the GF(2^m) Montgomery multiplication, which calculates A(x)B(x)x^{-u} in GF(2^m) for two polynomials A(x), B(x) in GF(2^m) and a properly chosen u. Then, we investigate the rule for multiplicand partition used by PCHS, a divide-and-conquer algorithm originally proposed for multiplication over GF(2^m) with odd m. By adopting similar rules for splitting A(x) and B(x) in A(x)B(x)x^{-u}, we develop new Montgomery multiplication formulae for GF(2^m) with m either odd or even. Based on this new approach, we develop the corresponding bit-parallel Montgomery multipliers for the Galois fields generated by trinomials. A new bit-reusing trick is applied to eliminate redundant XOR gates from the new multiplier. The time complexity (i.e. the delay) and the space complexity (i.e. the number of logic gates) of the new multiplier are explicitly analysed: 1. The new multiplier uses about 25% fewer logic gates than the previous trinomial-based Montgomery or Mastrovito multipliers on GF(2^m) when m is large enough, and its gate count is very close to that of the Karatsuba multiplier proposed by Elia. 2. While having a significantly smaller number of logic gates, the new multiplier is at most two T_X larger in total delay than the fastest bit-parallel multiplier on GF(2^m), where T_X is the XOR gate delay. 3. We determine the space and time complexities of our multiplier on the two fields recommended by the National Institute of Standards and Technology (NIST). At the cost of at most one extra T_X in total delay, our multiplier has a more than 15% smaller gate count than the other Montgomery or Mastrovito multipliers; moreover, it is one T_X faster than Elia's multiplier at the cost of a less than 1% increase in the total number of logic gates.
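To make the operation A(x)B(x)x^{-u} concrete, here is a minimal software sketch of Montgomery multiplication over GF(2^m), with polynomials encoded as Python integers. It only illustrates the reduction idea; the thesis is about bit-parallel hardware multipliers and gate-level optimisations, which a software sketch like this does not model:

```python
def clmul(a, b):
    """Carry-less multiplication: product of two GF(2)[x] polynomials encoded as ints."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def pmod(a, f):
    """Remainder of polynomial a modulo f over GF(2)."""
    df = f.bit_length() - 1
    while a and a.bit_length() - 1 >= df:
        a ^= f << (a.bit_length() - 1 - df)
    return a

def inv_mod_xu(f, u):
    """Inverse of f modulo x^u (f must have constant term 1), by Hensel lifting:
    over GF(2), g <- f*g^2 doubles the precision of the inverse at each step."""
    g, k = 1, 1
    while k < u:
        k *= 2
        g = clmul(f, clmul(g, g)) & ((1 << k) - 1)
    return g & ((1 << u) - 1)

def montgomery_mul(A, B, f, u, finv_u):
    """Compute A(x)*B(x)*x^(-u) mod f(x) over GF(2) via Montgomery reduction:
    M = T*f^(-1) mod x^u makes T + M*f divisible by x^u, and the exact
    quotient is congruent to T*x^(-u) modulo f."""
    T = clmul(A, B)
    M = clmul(T & ((1 << u) - 1), finv_u) & ((1 << u) - 1)
    return pmod((T ^ clmul(M, f)) >> u, f)

# Toy usage on GF(2^7) with the irreducible trinomial f(x) = x^7 + x + 1 and u = 7.
f, u = (1 << 7) | 0b11, 7
finv_u = inv_mod_xu(f, u)
A, B = 0b1010011, 0b0110101
print(bin(montgomery_mul(A, B, f, u, finv_u)))   # A(x)*B(x)*x^(-7) mod f(x)
```

The point of the reduction is that adding M*f clears the low u bits of T, so a shift replaces a general division by f; the bit-parallel multipliers studied in the thesis unroll such computations into fixed XOR/AND networks and analyse their gate count and delay.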
9

Champs aléatoires de Markov cachés pour la cartographie du risque en épidémiologie / Hidden Markov random fields for risk mapping in epidemiology

Azizi, Lamiae 13 December 2011 (has links)
The analysis of the geographical variations of a disease and their representation on a map is an important step in epidemiology. The goal is to identify homogeneous regions in terms of disease risk and to gain better insight into the mechanisms underlying the spread of the disease. We recast the disease-mapping issue of automatically classifying geographical units into risk classes as a clustering task, using a discrete hidden Markov model with Poisson class-dependent distributions. The designed hidden Markov prior is non-standard and consists of a variation of the Potts model in which the interaction parameter can depend on the risk classes. The model parameters are estimated using an EM algorithm and the mean-field approximation. This provides a way to face the intractability of the standard EM in this spatial context, offering a computationally efficient alternative to more intensive simulation-based Markov chain Monte Carlo (MCMC) procedures. We then focus on the issue of dealing with very low risk values and small numbers of observed cases and population sizes. We address the problem of finding good initial parameter values in this context and develop a new initialization strategy appropriate for spatial Poisson mixtures in the case of not-so-well-separated classes, as encountered in animal disease risk analysis. We illustrate the performance of the proposed methodology on animal epidemiological datasets provided by INRA.
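As a rough formal sketch of the model class described above (the notation is chosen here for illustration and the exact parametrisation in the thesis may differ): with z_i the latent risk class of geographical unit i, y_i its observed case count and E_i its expected count, a Potts-type prior with class-dependent interaction combined with Poisson emissions can be written as

\[
P(z) \;\propto\; \exp\Big( \sum_{i \sim j} \beta_{z_i}\,\mathbf{1}(z_i = z_j) \Big),
\qquad
y_i \mid z_i = k \;\sim\; \mathrm{Poisson}(\lambda_k E_i),
\]

where i ∼ j ranges over neighbouring units, β_k is the interaction parameter of class k and λ_k its relative risk; EM with a mean-field approximation replaces the intractable neighbour coupling by each unit's current class probabilities, avoiding MCMC sampling.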
10

Multimodal high-resolution mapping of contracting intact Langendorff-perfused hearts

Schröder-Schetelig, Johannes 07 September 2020 (has links)
No description available.
