
Numerical simulation of multiphase immiscible flow on unstructured meshes

Jofre Cruanyes, Lluís. 25 July 2014
The present thesis aims at developing a basis for the numerical simulation of multiphase flows of immiscible fluids. This approach, although limited by the computational power of present computers, is potentially very important, since most of the physical phenomena of these flows happen on space and time scales where experimental techniques cannot be used in practice. In particular, this research focuses on developing numerical discretizations suitable for three-dimensional (3-D) unstructured meshes. The first chapter restricts the multiphase flows considered to the case in which the components are immiscible fluids, focusing on those cases where two or more continuous streams of different fluids are separated by interfaces, and hence correspondingly named separated flows. Once the type of flow is determined, the chapter introduces its physical characteristics and the models available to predict its behavior, as well as the mathematical formulation that sustains the numerical techniques developed within this thesis. The second chapter introduces and analyzes a new geometrical Volume-of-Fluid (VOF) method for capturing interfaces on 3-D Cartesian and unstructured meshes. The method reconstructs interfaces as first- and second-order piecewise planar approximations (PLIC), and advects volumes in a single unsplit Lagrangian-Eulerian (LE) geometrical algorithm that constructs flux polyhedra by tracing back the Lagrangian trajectories of the cell-vertex velocities. In this way, overlaps between flux polyhedra are minimized. Complementing the previous chapter, the third one proposes a parallelization strategy for the VOF method. The main obstacle is that the computational cost is concentrated at the interface between fluids.
Consequently, if the interface is not homogeneously distributed across the domain, standard domain decomposition (DD) strategies lead to imbalanced workload distributions. Hence, the new strategy is based on a load-balancing process complementary to the underlying domain decomposition. Its parallel efficiency has been analyzed using up to 1024 CPU cores, and the results show speedups of up to 12x with respect to the standard DD strategy, depending on the size of the interface and the initial distribution. The fourth chapter describes the discretization of the single-phase Navier-Stokes equations, later extended to the case of multiphase immiscible flow. One of the most important characteristics of a discretization scheme, aside from accuracy, is its capacity to discretely conserve kinetic energy, especially when solving turbulent flow. Hence, this chapter analyzes the accuracy and conservation properties of two particular collocated and staggered mesh schemes. The extension of these single-phase numerical schemes to multiphase immiscible flow is developed in the fifth chapter. While the numerical techniques for the simulation of turbulent flow have evolved to discretely preserve mass, momentum and, especially, kinetic energy, the mesh schemes for the discretization of multiphase immiscible flow have evolved to improve their stability and robustness. Therefore, this chapter presents and analyzes two particular collocated and staggered mesh discretizations, able to simulate multiphase immiscible flow, which favor the discrete conservation of mass, momentum and kinetic energy. Finally, the sixth chapter numerically simulates the Richtmyer-Meshkov (RM) instability of two incompressible immiscible liquids, as a general assessment of the numerical methods developed throughout this thesis. In particular, the instability has been simulated by means of a VOF method and a staggered mesh scheme.
The corresponding numerical results have shown the capacity of the discrete system to obtain accurate results for the RM instability.
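To make the unsplit Lagrangian-Eulerian idea concrete, here is a minimal, hypothetical sketch (not the thesis' actual implementation) of the backtracing step: the flux polyhedra are built from the departure points obtained by tracing each cell vertex backwards along its velocity, here with a first-order step.

```python
import numpy as np

def departure_points(vertices, velocity, dt):
    """Trace cell vertices backwards along their velocities (first order):
    x_d = x - u(x) * dt. The flux polyhedron for a face is then built from
    the face vertices and their departure points."""
    return vertices - dt * velocity(vertices)

# Toy case: a unit square cell under a uniform translation u = (1, 0)
verts = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
moved = departure_points(verts, lambda x: np.array([1.0, 0.0]), dt=0.25)
# Each vertex is shifted upstream by u * dt = (0.25, 0)
```

Because neighbouring faces share vertices and hence share departure points, the swept volumes fit together, which is what keeps the overlap between flux polyhedra small in the unsplit scheme.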

Contributions to computed tomography image coding for JPEG2000

Muñoz Gómez, Juan. 13 January 2014
Nowadays, thanks to advances in medical science, there exist many different medical imaging techniques aimed at revealing, diagnosing, or examining a disease. Many of these techniques produce very large amounts of data, especially the Computed Tomography (CT), Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET) modalities. To manage these data, medical centers use PACS and the DICOM standard to store, retrieve, distribute, and display medical images. As a result of the high cost of storing and transmitting digital medical images, data compression plays a key role. JPEG2000 is the state of the art in image compression for the storage and transmission of medical images. It is the latest coding system included in DICOM and it provides some capabilities that are interesting for medical image coding: JPEG2000 enables the use of windows of interest, access to the image at different resolutions, and decoding of a specific region of the image. This thesis deals with three different problems detected in CT image coding. The first is the noise that CT images contain. This noise is produced by the use of a low radiation dose during the scan; it yields low-quality images and penalizes coding performance. The use of different noise filters enhances image quality and also increases coding performance.
The second question addressed in this dissertation is the use of multi-component transforms in CT image coding. Depending on the correlation among the slices of a CT volume, the coding performance of these transforms can vary, and even decrease with respect to plain JPEG2000. Finally, the last contribution deals with the diagnostically lossless coding paradigm, for which a new segmentation method is proposed. By using segmentation to detect the biological area and discard the non-biological area, JPEG2000 can achieve improvements of more than 2 bpp.
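As a rough illustration of the diagnostically lossless idea, and not the segmentation method the thesis actually proposes, the following hedged sketch thresholds a CT slice (in Hounsfield-like units) to detect the biological area and fills the background with a constant value, which a subsequent JPEG2000 encoder would compress at negligible cost. The threshold value and function names are illustrative assumptions.

```python
import numpy as np

def biological_mask(slice_hu, air_threshold=-500):
    """Assume voxels denser than air belong to the biological area."""
    return slice_hu > air_threshold

def mask_background(slice_hu, fill=-1000.0):
    """Keep the biological area, replace the rest by a constant that
    costs almost no bits to encode."""
    mask = biological_mask(slice_hu)
    return np.where(mask, slice_hu, fill), mask

# Toy "slice": air everywhere except a 4x4 square of soft tissue
ct = np.full((8, 8), -1000.0)
ct[2:6, 2:6] = 40.0
masked, m = mask_background(ct)
```

In a real pipeline the mask itself would be chosen so that no diagnostically relevant voxel is ever discarded, which is what makes the scheme "diagnostically lossless" rather than simply lossy.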

Image segmentation evaluation and its application to object detection

Pont Tuset, Jordi. 19 February 2014
The first parts of this thesis focus on the supervised evaluation of image segmentation algorithms: supervised in the sense that the segmentation results are compared, by means of different similarity measures, to a human-made annotation known as the ground truth. The evaluation therefore depends on three main points. First, the image segmentation techniques we evaluate. We review the state of the art in image segmentation, making an explicit distinction between techniques that provide a flat output, that is, a single clustering of the set of pixels into regions; and those that produce a hierarchical segmentation, that is, a tree-like structure that represents regions at different scales, from the details to the whole image. Second, ground-truth databases, which are of paramount importance in the evaluation. They can be divided into those annotated only at the object level, that is, with marked sets of pixels that refer to objects that do not cover the whole image; and those with annotated full partitions, which provide a complete clustering of all pixels in an image. Depending on the type of database, we say that the analysis is done from an object perspective or from a partition perspective. Finally, the similarity measures used to compare the generated results to the ground truth, which provide the quantitative tool to evaluate whether our results are good and in which ways they can be improved. The main contributions of the first parts of the thesis are in the field of similarity measures. First of all, from an object perspective, we review the basic measures used to compare two object representations and show that some of them are equivalent. In order to evaluate full partitions and hierarchies against an object, one needs to select which of their regions form the object to be assessed; we review and improve these techniques by means of a mathematical model of the problem.
This analysis allows us to show that hierarchies can represent objects much better, with far fewer regions, than flat partitions. From a partition perspective, the literature on evaluation measures is large and entangled. Our first contribution is to review, structure, and deduplicate the available measures. We then propose a new measure and show that it improves previous ones in terms of a set of qualitative and quantitative meta-measures, and we extend the measures on flat partitions to cover hierarchical segmentations. The second part of this thesis moves from the evaluation of image segmentation to its application to object detection. In particular, we build on some of the conclusions extracted in the first part to generate segmented object candidates. Given a set of hierarchies, we build pairs and triplets of regions, learn to combine the sets from each hierarchy, and rank the candidates using low-level and mid-level cues. We conduct an extensive experimental validation showing that our method outperforms the state of the art on most of the metrics tested.
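One representative object-perspective measure of the kind the text reviews is the Jaccard index (intersection over union) between a candidate region and a ground-truth object mask; the specific measures analyzed in the thesis may differ, so this is only an illustrative sketch.

```python
import numpy as np

def jaccard(candidate, ground_truth):
    """Jaccard index |A ∩ B| / |A ∪ B| between two boolean pixel masks."""
    inter = np.logical_and(candidate, ground_truth).sum()
    union = np.logical_or(candidate, ground_truth).sum()
    return inter / union if union else 1.0

# Ground-truth object: a 2x2 square (4 pixels)
gt = np.zeros((4, 4), dtype=bool)
gt[1:3, 1:3] = True
# Candidate region: a 2x3 rectangle (6 pixels, 4 of them overlapping)
cand = np.zeros((4, 4), dtype=bool)
cand[1:3, 1:4] = True

score = jaccard(cand, gt)  # 4 / 6
```

Selecting, for a hierarchy, the subset of regions whose union maximizes such a score is precisely the assessment problem the mathematical model in the text addresses.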

Improving Memory Hierarchy Performance on MapReduce Frameworks for Multi-Core Architectures

de Souza Ferreira, Tharso. 08 November 2013
The need to analyze large data sets from many different application fields has fostered the use of simplified programming models like MapReduce. Its current popularity is justified by being a useful abstraction for expressing data-parallel processing and by effectively hiding synchronization, fault-tolerance and load-balancing details from the application developer. MapReduce frameworks have also been ported to multi-core and shared-memory computer systems. These frameworks dedicate a different CPU core to each Map or Reduce task so that tasks execute concurrently, and the Map and Reduce phases share a common data structure where the main computations are applied. In this work we describe some limitations of current multi-core MapReduce frameworks. First, we describe the relevance of the data structure used to keep all input and intermediate data in memory. Current multi-core MapReduce frameworks are designed to keep all intermediate data in memory; when executing applications with large inputs, the available memory becomes too small to store all of the framework's intermediate data, and a severe performance loss follows. We propose a memory-management subsystem that allows the intermediate data structures to process an unlimited amount of data through a disk-spilling mechanism, and we implement a way to manage the concurrent disk access of all threads participating in the computation. Finally, we study the effective use of the memory hierarchy by the data structures of MapReduce frameworks and propose a new implementation of partial MapReduce tasks over the input data set.
The objective is to make better use of the cache by eliminating references to data blocks that are no longer in use. Our proposal significantly reduces main-memory usage and improves overall performance through the increased use of the cache.
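The disk-spilling idea can be sketched as a toy key-value store that flushes itself to disk whenever it exceeds a fixed in-memory limit, so the memory footprint stays bounded regardless of input size. The class name, the limit policy and the pickle-based spill format are illustrative assumptions, not the framework's actual API.

```python
import os
import pickle
import tempfile

class SpillingStore:
    """Toy intermediate store: spills to disk past `limit` in-memory pairs."""

    def __init__(self, limit, spill_dir):
        self.limit, self.dir = limit, spill_dir
        self.mem, self.spills = {}, []

    def emit(self, key, value):
        """Map phase: append a value; spill if the memory budget is hit."""
        self.mem.setdefault(key, []).append(value)
        if sum(len(v) for v in self.mem.values()) >= self.limit:
            self._spill()

    def _spill(self):
        path = os.path.join(self.dir, f"spill_{len(self.spills)}.pkl")
        with open(path, "wb") as f:
            pickle.dump(self.mem, f)
        self.spills.append(path)
        self.mem = {}

    def merged(self):
        """Reduce phase: merge in-memory data with every on-disk spill."""
        out = {k: list(v) for k, v in self.mem.items()}
        for path in self.spills:
            with open(path, "rb") as f:
                for k, v in pickle.load(f).items():
                    out.setdefault(k, []).extend(v)
        return out

with tempfile.TemporaryDirectory() as d:
    store = SpillingStore(limit=3, spill_dir=d)
    for word in ["a", "b", "a", "c", "b", "a"]:
        store.emit(word, 1)          # word-count style emission
    counts = {k: sum(v) for k, v in store.merged().items()}
```

In a multi-threaded framework the `_spill` step is where concurrent disk access has to be arbitrated, which is the second mechanism the abstract describes.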

A framework for efficient execution of matrix computations

Herrero Zaragoza, Jose Ramón. 07 July 2006
Matrix computations lie at the heart of most scientific computational tasks. The solution of linear systems of equations is a very frequent operation in many fields of science, engineering, surveying, physics and others, and other matrix operations occur frequently in fields such as pattern recognition and classification, or multimedia applications. It is therefore important to perform matrix operations efficiently. The work in this thesis focuses on the efficient execution, on commodity processors, of matrix operations which arise frequently in different fields. We study some important operations which appear in the solution of real-world problems: some sparse and dense linear algebra codes and a classification algorithm. In particular, we focus our attention on the efficient execution of the following operations: sparse Cholesky factorization, dense matrix multiplication, dense Cholesky factorization, and Nearest Neighbor classification. A lot of research has been conducted on the efficient parallelization of numerical algorithms. However, the efficiency of a parallel algorithm ultimately depends on the performance obtained from the computations performed on each node, and the work presented in this thesis focuses on sequential execution on a single processor. There exist a number of data structures for sparse computations which can be used to avoid storing and computing on zero elements. We work with a hierarchical data structure known as a hypermatrix: a matrix is subdivided recursively an arbitrary number of times, several pointer matrices store the locations of submatrices at each level, and the last level consists of data submatrices which are dealt with as dense submatrices. When the block size of these dense submatrices is small, the number of stored zeros can be greatly reduced; however, the performance obtained from BLAS3 routines drops heavily.
Consequently, there is a trade-off in the size of the data submatrices used for a sparse Cholesky factorization with the hypermatrix scheme. Our goal is to reduce the overhead introduced by unnecessary operations on zeros when a hypermatrix data structure is used to compute a sparse Cholesky factorization, and we study several techniques for reducing such overhead in order to obtain high performance. Another of our goals is the creation of codes which work efficiently on different platforms when operating on dense matrices. To obtain high performance, the resources offered by the CPU must be properly utilized while, at the same time, the memory hierarchy must be exploited to tolerate increasing memory latencies. To achieve the former, we produce inner kernels which use the CPU very efficiently. To achieve the latter, we investigate nonlinear data layouts, since such data formats can contribute to the effective use of the memory system. The use of highly optimized inner kernels is of paramount importance for obtaining efficient numerical algorithms. Often, such kernels are created by hand. However, we want to create efficient inner kernels for a variety of processors using a general approach that avoids hand-coding in assembly language. In this work, we present an alternative way to produce efficient kernels automatically, based on a set of simple codes written in a high-level language which can be parameterized at compilation time. The advantage of our method lies in the ability to generate very efficient inner kernels by means of a good compiler: working on regular codes for small matrices, most of the compilers we used on different platforms created very efficient inner kernels for matrix multiplication. Using the resulting kernels we have been able to produce high-performance sparse and dense linear algebra codes on a variety of platforms. In this work we also show that techniques used in linear algebra codes can be useful in other fields.
We present the work we have done on the optimization of Nearest Neighbor classification, focusing on the speed of the classification process. Tuning several codes for different problems and machines can become a heavy and unbearable task, so we have also developed an environment for the development and automatic benchmarking of codes, which is presented in this thesis. As a practical result of this work, we have been able to create efficient codes for several matrix operations on a variety of platforms; our codes are highly competitive with other state-of-the-art codes for some problems.
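The hypermatrix idea can be illustrated with a toy two-level version: a pointer matrix whose entries are either `None` (an all-zero block that is neither stored nor computed on) or a dense submatrix handled by dense kernels. This is a hedged sketch of the concept, not the thesis' actual data structure, which uses several recursive pointer levels.

```python
import numpy as np

def hyper_matvec(blocks, x, bs):
    """Multiply a one-level hypermatrix by a vector, skipping None blocks.

    `blocks` is a list of lists: entry (i, j) is either None (zero block)
    or a dense (bs x bs) ndarray processed by a dense (BLAS-like) kernel.
    """
    y = np.zeros(len(blocks) * bs)
    for i, row in enumerate(blocks):
        for j, blk in enumerate(row):
            if blk is not None:                      # zero blocks cost nothing
                y[i*bs:(i+1)*bs] += blk @ x[j*bs:(j+1)*bs]
    return y

bs = 2
A = [[np.eye(bs), None],
     [None, 2.0 * np.eye(bs)]]      # block-diagonal: diag(1, 1, 2, 2)
x = np.array([1.0, 2.0, 3.0, 4.0])
y = hyper_matvec(A, x, bs)
```

The trade-off the abstract describes is visible even here: smaller `bs` lets more blocks become `None` (fewer stored zeros), but each dense kernel call then operates on a matrix too small for BLAS3 routines to run efficiently.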

The Jacobi identities for finite-dimensional Poisson structures: a P.D.E. based analysis of some new constructive results and solution families

Hernández Bermejo, Benito. 17 April 2008
Jacobi equations constitute a set of nonlinear partial differential equations which arise from the implementation, in an arbitrary system of coordinates, of a Poisson structure defined on a finite-dimensional smooth manifold. Certain skew-symmetric solutions of such equations are investigated in this dissertation, from a twofold perspective that includes both the determination of new solution families and the construction of new global Darboux analyses of Poisson structures. The most general results refer to solutions of arbitrary dimension; the perspective thus obtained is of interest in view of the relatively modest number of solution families of this kind reported in the literature. In addition, the global Darboux analysis of structure matrices deals, first, with the global determination of complete sets of functionally independent distinguished invariants, thus providing a global description of the symplectic structure of the phase space of any associated Poisson system; and second, with the constructive and global determination of the Darboux canonical form. This kind of analysis is of interest because the construction of the Darboux coordinates has been accomplished only for a limited sample of Poisson structures and because performing such a reduction globally improves the scope of Darboux's theorem, which in principle only guarantees the local existence of Darboux coordinates. In this work, such reductions sometimes make use of time reparametrizations, in agreement with the usual definitions of system equivalence. In fact, time reparametrizations play a significant role in understanding the conditions under which the Darboux canonical form can be globally implemented, a question also investigated in detail in this dissertation, and the implications of these results for integrability issues are considered in this context. The dissertation is structured as follows.
Chapter 1 reviews diverse classical and well-known results that form the basic framework of the investigation. The original contributions of the thesis are contained in Chapters 2 to 4. Finally, the work ends in Chapter 5 with the presentation of some conclusions.
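For reference, the Jacobi equations referred to above are, in an arbitrary coordinate system, the standard quadratic first-order PDE system satisfied by the skew-symmetric structure matrix $J^{ij}(x)$ of a Poisson bracket:

```latex
% Jacobi identities for a structure matrix J^{ij}(x), plus skew-symmetry:
\sum_{l=1}^{n} \left( J^{li}\,\partial_l J^{jk}
                    + J^{lj}\,\partial_l J^{ki}
                    + J^{lk}\,\partial_l J^{ij} \right) = 0 ,
\qquad J^{ij} = -J^{ji}, \qquad i,j,k = 1,\ldots,n .
```

Any skew-symmetric matrix field satisfying this system defines a Poisson bracket $\{f,g\} = \sum_{i,j} J^{ij}\,\partial_i f\,\partial_j g$, which is the setting in which the solution families of the dissertation are constructed.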

Complexity in Slowly-Driven Interaction-Dominated Threshold Systems: the Case of Rainfall

Deluca Silberberg, Anna. 09 December 2013
Molts processos geofísics presenten comportament emergent. Aquest sovint es manifesta com a regularitats estadístiques de gran escala com les distribucions de lleis de potències de certs observables dels corresponents sistemes. En aquesta tesi investiguem l’aparició d' aquestes regularitats, desenvolupant tècniques estadístiques per fer estimacions acurades dels paràmetres de les distribucions de lleis de potencies. El nostre mètode proporciona un criteri objectiu per escollir el domini on la distribució segueix una llei de potencies. L' apliquem per investigar temps de vida mitja d’elements radioactius, el moment sísmic de terratrèmols, l’energia dels ciclons tropicals, els incendis forestals, i els temps d’espera entre terratrèmols. En el cas de la pluja també s'han observat, per mesures a latituds mitjanes, lleis de potències per les mides dels esdeveniments. En aquest estudi, apliquem el mètode per investigar si aquestes observacions es poden reproduir per dades de diversos climes diferents. Els resultats són positius i constitueixen un indici més de què la convecció atmosfèrica i les precipitacions podrien ser un exemple, al món real, de la Criticalitat Auto-Organitzada (Self-Organised Criticality o SOC en anglès; un mecanisme que explica l'aparició de lleis de potències a la natura). També fem un anàlisi d'escala per tal d’observar el col·lapse de les distribucions. Tanmateix, el mètode no serveix per comprovar la presència d’universalitat, que és quelcom que s'espera observar en un sistema SOC. Per tant, hem desenvolupat un mètode basat en un test de permutació per tal de determinar si els exponents estimats són estadísticament compatibles. El nostre test permutacional alternatiu dóna resultats clars: tot i el fet que les diferències entre els exponents són més aviat petites, la presència d’universalitat queda descartada. 
El fet que la hipòtesi d’universalitat quedi rebutjada en aquests tests, no implica però que s’hagi de descartar l’existència d’un mecanisme universal per la convecció atmosfèrica, ja que les dades recol·lectades podrien presentar errors sistemàtics no controlats. Finalment, estudiem les conseqüències dels resultats anteriors en la predicció de fenòmens atmosfèrics. Analitzem l'efecte de posar llindars d'observació en models SOC i dades de pluja. La predictibilitat de fenòmens extrems i intensitats extremes s’estudia mitjançant una variable de decisió sensible a la tendència a formar “clusters” o a repel·lir-se dels esdeveniments. Avaluem la qualitat d'aquestes prediccions mitjançant el mètode anomenat Característica Operativa del Receptor. En l’escala d’esdeveniments (gran escala), els temps entre esdeveniments de pluja renormalitzen a un procés de puntual trivial, i llavors la predictibilitat decreix quan el llindar creix. El mateix comportament s'observa per series temporals de models SOC en els quals s'ha aplicat un llindar de detecció d'intensitats, però s'observa el comportament contrari quan aquest no s'aplica. En l'escala de les intensitats (curta escala), la predicció no es veu afectada pel llindar, donat que els processos roman gairebé inalterat (així també els exponents crítics corresponents) fins que llindars significativament elevats s’assoleixen. / Many geophysical phenomena present emergent behaviour, which manifested as large-scale statistical regularities such as power-law distributions for the coarse-grained observables of the corresponding systems. In this thesis we investigate the appearance of power-law distributions in geophysical phenomena. We develop a statistical technique for making accurate estimations of the parameters of power-law distributions. 
The method introduced, which provides an objective criterion for deciding the power-law domain of the distribution, is applied to investigate the half-lives of radioactive elements, the seismic moment of earthquakes, the energy of tropical cyclones, the area burnt in forest fires and the waiting times between earthquakes. In addition, the method is applied to investigate whether the observation of scale-free rain-event avalanche distributions can be reproduced with data across diverse climates, and to look for signs of universality in the associated fitted exponents. Scaling techniques are also applied in order to observe the collapse of the distributions. This study contributes to a recent array of statistical measures supporting the hypothesis that atmospheric convection and precipitation may be a real-world example of Self-Organised Criticality (SOC, a mechanism able to reproduce the observed power laws). Another expectation of the SOC paradigm is universality, but the fitting method alone is not enough to check this hypothesis. Therefore, a method based on a permutation test is developed in order to determine whether the estimated exponents are statistically compatible. Our permutation tests give clear results: despite the fact that the differences between the exponents are rather small, the universality hypothesis is rejected. However, the rejection of the universality hypothesis in these tests does not mean that one has to rule out the existence of a universal mechanism for atmospheric convection, as uncontrolled systematic errors may be present in the collection of the data. Finally, we study the consequences of the previous results for the prediction of atmospheric phenomena by analysing the effect of applying thresholds to SOC models and rainfall time series. 
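The permutation test for exponent compatibility can be sketched as follows. This is a generic illustration of the idea (maximum-likelihood exponent fits compared against reshuffled pooled samples), with a fixed power-law domain `xmin` assumed for simplicity; it is not the thesis's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_exponent(x, xmin):
    """Maximum-likelihood (Hill) estimate of the exponent alpha of a
    continuous power law p(x) ~ x^(-alpha) for x >= xmin."""
    x = np.asarray(x)
    x = x[x >= xmin]
    return 1.0 + len(x) / np.sum(np.log(x / xmin))

def permutation_test(a, b, xmin, n_perm=2000):
    """p-value for H0: both samples share a common exponent.  The observed
    difference of fitted exponents is compared with the differences
    obtained after randomly reshuffling the pooled sample."""
    observed = abs(fit_exponent(a, xmin) - fit_exponent(b, xmin))
    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        diff = abs(fit_exponent(perm[:len(a)], xmin) -
                   fit_exponent(perm[len(a):], xmin))
        count += diff >= observed
    return count / n_perm

# two synthetic samples drawn from the same power law (alpha = 2.5,
# xmin = 1), generated by inverse-transform sampling
a = (1.0 - rng.random(500)) ** (-1.0 / 1.5)
b = (1.0 - rng.random(500)) ** (-1.0 / 1.5)
p_value = permutation_test(a, b, xmin=1.0)
```

A small p-value would indicate that the two fitted exponents are statistically incompatible, which is the sense in which universality is rejected in the thesis.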
The predictability of extreme events and extreme intensities is studied by means of a decision variable sensitive to the tendency of the events to cluster or to repel one another, and the quality of the predictions is evaluated by the receiver operating characteristic (ROC) method. On the event scale (large scale), the times between events for rainfall data renormalise to a trivial point process, so predictability decreases as the threshold increases. The same behaviour is observed for SOC-model time series to which an intensity detection threshold has been applied, whereas the opposite behaviour appears when it has not. In the intensity picture (short scale), the prediction is not affected by the threshold, as the process (and the corresponding critical exponents) remains mostly unchanged until very high thresholds are reached.
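The ROC evaluation can be illustrated with a minimal sketch: a decision variable is swept over all alarm thresholds, trading hit rate against false-alarm rate. The synthetic score below merely stands in for the clustering-sensitive variable of the thesis; the noise model and all names are illustrative assumptions.

```python
import numpy as np

def roc_curve(scores, labels):
    """Hit rate vs. false-alarm rate as the alarm threshold on the
    decision variable is swept from high to low."""
    order = np.argsort(scores)[::-1]              # descending scores
    labels = np.asarray(labels, dtype=float)[order]
    hits = np.cumsum(labels) / labels.sum()
    false_alarms = np.cumsum(1.0 - labels) / (1.0 - labels).sum()
    return false_alarms, hits

def auc(false_alarms, hits):
    """Area under the ROC curve (trapezoidal rule)."""
    return float(np.sum(np.diff(false_alarms) * (hits[1:] + hits[:-1]) / 2))

# synthetic example: a score that carries partial information about
# whether the next observation is an "extreme event"
rng = np.random.default_rng(1)
labels = (rng.random(5000) < 0.3).astype(float)
scores = labels + 0.8 * rng.standard_normal(5000)
fas, hits = roc_curve(scores, labels)
area = auc(fas, hits)   # ~0.5 for no skill, 1.0 for perfect prediction
```

The area under the curve summarises predictability in a single number, which is how the effect of raising the observation threshold can be tracked.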
78

Optimal signal recovery for pulsed balanced detection

Icaza Astiz, Yannik Alan de 27 January 2015 (has links)
Measuring quantum features in a classical world forces us to push classical technology to the limit, inventing and discovering new schemes for using classical devices while reducing and filtering the sources of noise. Balanced detectors, e.g. when measuring a low-noise laser, have become an exceptional tool for attaining the shot-noise level, i.e., the standard quantum limit for measuring light. Detecting light pulses at this level requires reducing and also filtering all other sources of noise, namely electronic and technical noise. The aim of this work is to provide a new tool for filtering the technical and electronic noise present in pulses of light. It is especially relevant for signal-processing methods in quantum-optics experiments, as a means to reach the shot-noise level and to reduce strong technical noise by means of a pattern function. We thus present the theoretical model for pattern-function filtering, starting with a theoretical model of a balanced detector. Next, we indicate how to recover the signal from the output of the balanced detector, and a noise model is proposed together with the conditions that the filtering algorithm should satisfy. Finally, the problem is solved, yielding the pattern function that filters out technical and electronic noise. Once the pattern function is obtained, we design an experimental setup to test and demonstrate this model-based technique. To accomplish this, we produce pulses of light using acousto-optic modulators; these pulses are precisely characterized together with the detection system. The data are gathered in the time domain with an oscilloscope, and the frequency-domain representation is calculated numerically. In this way, it is proved that our detector is shot-noise limited for continuous-wave light. 
Next, it is shown how technical noise is produced in a controlled manner, and how to gather the information needed to calculate the pattern function. Finally, shot-noise-limited detection is first demonstrated for pulses with no technical noise introduced, followed by an experimental demonstration in which 10 dB of technical noise is filtered out using the pattern function. The final part of this research focuses on optimal signal recovery for pulsed polarimetry. We recall the Stokes parameters and how to estimate the polarization state from a signal. Next, we introduce a widely used signal-processing technique, the Wiener filter. As the final step, we show how to retrieve, under the best conditions, the polarization-rotation angle from a signal that carries 10 dB of technical noise; our technique outperforms the Wiener estimator while reaching the standard quantum limit for phase/angle estimation. Because of the close analogy between pulsed polarimetry and magnetic-field estimation with atomic ensembles via the Faraday effect, this pattern-function filtering technique can readily be used for probing atomic ensembles in environments with strong technical noise.
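The idea behind pattern-function filtering can be sketched with a textbook minimum-variance linear estimator: given an assumed mean pulse shape and a noise covariance mixing a correlated "technical" component with white "electronic" noise, the weight w ∝ C⁻¹s suppresses the correlated noise that a plain boxcar integration passes through. This is a generic generalized-least-squares sketch under invented parameters, not the thesis's derivation.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 64
t = np.arange(n)
shape = np.exp(-0.5 * ((t - n / 2) / 6.0) ** 2)   # assumed mean pulse shape

def noisy_pulse(amplitude):
    """Pulse with a slowly varying 'technical' offset common to all
    samples plus white 'electronic' noise (an invented noise model)."""
    technical = 0.5 * rng.standard_normal() * np.ones(n)
    electronic = 0.1 * rng.standard_normal(n)
    return amplitude * shape + technical + electronic

# pattern function: minimum-variance unbiased linear weight for this
# noise covariance, w ~ C^{-1} s, normalized so that w . s = 1
C = 0.25 * np.ones((n, n)) + 0.01 * np.eye(n)     # technical + electronic
w = np.linalg.solve(C, shape)
w /= w @ shape

pattern = np.array([w @ noisy_pulse(1.0) for _ in range(2000)])
boxcar = np.array([noisy_pulse(1.0).sum() / shape.sum()
                   for _ in range(2000)])
# the pattern weights reject the correlated technical noise that the
# plain boxcar integration cannot distinguish from the pulse area
```

Both estimators are unbiased, but the pattern-weighted estimate has a far smaller spread, which is the sense in which such a filter recovers the pulse amplitude in the presence of strong technical noise.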
79

Lattices over polynomial rings and applications to function fields

Bauch, Jens-Dietrich 01 July 2014 (has links)
This thesis deals with lattices over polynomial rings and their applications to algebraic function fields. In the first part, we consider the notion of lattices (L, | |) over polynomial rings, where L is a finitely generated module over k[t], the polynomial ring over the field k in the indeterminate t, and | | is a real-valued length function on the tensor product of L and k(t) over k[t]. A reduced basis of (L, | |) is a basis of L whose vectors attain the successive minima of (L, | |). We develop an algorithm which transforms any basis of L into a reduced basis of (L, | |), for a given real-valued length function | |. Moreover, we generalize the Riemann-Roch theory for algebraic function fields to the context of lattices over k[t]. In the second part, we apply the previous results to algebraic function fields. For a divisor D of an algebraic function field F/k, we develop an algorithm for the computation of its Riemann-Roch space and the successive minima attached to the lattice (I, | |), where I is a fractional ideal (obtained from the ideal representation of D) of the finite maximal order O of F and | | is a certain length function on F. Let K be the full constant field of F/k. Then, we can express the genus g of F in terms of [K : k] and the indices of certain orders in the finite and infinite maximal orders of F. If k is a finite field, the Montes algorithm computes the latter indices as a by-product. This leads to a fast computation of the genus of global function fields. 
Our algorithm does not require the computation of any basis of either the finite or the infinite maximal order. Let A be the localization of k[1/t] at the prime ideal generated by 1/t. The concept of reducedness and the OM representations of prime ideals lead us, in this context, to a new method for the computation of k[t]-bases of fractional ideals of O and A-bases of fractional ideals of the infinite maximal order of F, respectively. In the last part, our algorithms are applied to a large number of relevant examples to illustrate their performance in comparison with the classical routines.
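In the k[t]-lattice language above, the successive minima and the reduced-basis property can be stated as follows (standard definitions, with sign and normalization conventions that may differ from the thesis's exact notation):

```latex
% Successive minima of a k[t]-lattice (L,|\,|) of rank n
\lambda_i(L) \;=\; \min\Bigl\{\, r \in \mathbb{R} \;:\;
  \dim_{k(t)} \operatorname{span}\{\, x \in L : |x| \le r \,\} \ge i
  \,\Bigr\}, \qquad 1 \le i \le n .

% A basis b_1,\dots,b_n of L is reduced iff |b_i| = \lambda_i(L) for all i.
% For a degree-type length function with integer minima, the k-dimension
% of the truncation of L at level m is then read off directly:
\dim_k \{\, x \in L : |x| \le m \,\}
  \;=\; \sum_{i=1}^{n} \max\bigl(0,\; m - \lambda_i(L) + 1 \bigr).
```

A basis attaining these minima is precisely a reduced basis, and for the length functions attached to divisors this truncated-dimension count is what ties the lattice picture to the computation of Riemann-Roch spaces.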
80

Diseño y aplicación de herramientas tecnológicas aplicadas a la identificación de elementos diferenciales del estilo compositivo de autores

Tudurí Vila, Antonio 31 May 2013 (has links)
No description available.
